Improving Fairness via Deep Ensemble Framework Using Preprocessing Interventions

  • Conference paper
Artificial Intelligence in HCI (HCII 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14050)

Abstract

In recent years, automated decision making has grown as a consequence of the wide use of machine learning models in real-world applications. Biases inherited by these algorithms can eventually result in discrimination and significantly affect people’s lives. Increased social awareness and ethical concerns in the human-centered AI community have led to responsible AI efforts addressing issues such as fairness in ML models. In this paper, we propose an ensemble framework of deep learning models to improve fairness, together with four different sampling strategies for analyzing the impact of different ensemble designs. Through experiments on two real-world datasets, we show that our proposed framework achieves higher fairness than several benchmark models with minimal loss of accuracy. Our experiments also show that a standard ensemble model without any fairness constraint does not remove bias, so a proper design is necessary.
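The abstract describes the framework only at a high level. As an illustration, the following is a minimal sketch of one plausible design of this kind, under assumptions not taken from the paper: a single balanced-resampling preprocessing strategy (the paper proposes four, which are not reproduced here), tiny logistic-regression base learners standing in for the deep models, and majority voting across ensemble members. The helper names (`balanced_resample`, `fit_logreg`) and the synthetic data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic tabular data; column 0 is a binary protected attribute.
n = 2000
A = rng.integers(0, 2, n)
X = np.column_stack([A.astype(float), rng.normal(size=(n, 3))])
# Biased labels: the group A=1 receives positive outcomes more often.
y = (X[:, 1] + 0.8 * A + rng.normal(scale=0.5, size=n) > 0.4).astype(int)

def balanced_resample(X, y, a, rng):
    """One illustrative preprocessing intervention: draw equal-size samples
    from each (protected group, label) cell of the training data."""
    cells = [(g, l) for g in (0, 1) for l in (0, 1)]
    size = min(int(np.sum((a == g) & (y == l))) for g, l in cells)
    idx = np.concatenate([
        rng.choice(np.flatnonzero((a == g) & (y == l)), size, replace=True)
        for g, l in cells
    ])
    return X[idx], y[idx]

def fit_logreg(X, y, lr=0.1, steps=500):
    """Tiny logistic-regression base learner (stand-in for a deep model)."""
    Xb = np.column_stack([np.ones(len(X)), X])  # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)      # gradient of log-loss
    return w

def predict(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return (Xb @ w > 0).astype(int)

# Ensemble: each member trains on its own balanced resample of the data.
weights = [fit_logreg(*balanced_resample(X, y, A, rng)) for _ in range(5)]

# Majority vote across the base learners.
votes = np.stack([predict(w, X) for w in weights])
pred = (votes.mean(axis=0) >= 0.5).astype(int)

# Demographic-parity gap: |P(pred=1 | A=1) - P(pred=1 | A=0)|.
gap = abs(pred[A == 1].mean() - pred[A == 0].mean())
print(f"demographic parity gap: {gap:.3f}")
```

The key design point the abstract makes is visible here: the fairness intervention lives in the per-member sampling step, not in the voting itself; an ensemble whose members all train on the original biased sample would simply vote the bias back in.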

Notes

  1. The scores of this paper were extracted and presented in [50].

References

  1. Dua, D., Graff, C.: UCI Machine Learning Repository. University of California, Irvine, School of Information and Computer Sciences (2017). http://archive.ics.uci.edu/ml

  2. Kamiran, F., Calders, T.: Data preprocessing techniques for classification without discrimination. Knowl. Inf. Syst. 33, 1–33 (2012)

  3. Bhaskaruni, D., Hu, H., Lan, C.: Improving prediction fairness via model ensemble. In: 2019 IEEE 31st International Conference On Tools With Artificial Intelligence (ICTAI), pp. 1810–1814 (2019)

  4. Grgić-Hlača, N., Zafar, M., Gummadi, K., Weller, A.: On fairness, diversity and randomness in algorithmic decision making. arXiv preprint arXiv:1706.10208 (2017)

  5. Tayebi, A., et al.: UnbiasedDTI: mitigating real-world bias of drug-target interaction prediction by using deep ensemble-balanced learning. Molecules 27, 2980 (2022)

  6. Rajabi, A., Garibay, O.: Tabfairgan: fair tabular data generation with generative adversarial networks. Mach. Learn. Knowl. Extract. 4, 488–501 (2022)

  7. Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. In: Advances in Neural Information Processing Systems, vol. 29 (2016)

  8. Verma, S., Rubin, J.: Fairness definitions explained. In: 2018 IEEE/ACM International Workshop on Software Fairness (FairWare), pp. 1–7 (2018)

  9. Calmon, F., Wei, D., Vinzamuri, B., Natesan Ramamurthy, K., Varshney, K.: Optimized pre-processing for discrimination prevention. In: Advances In Neural Information Processing Systems, vol. 30 (2017)

  10. Iosifidis, V., Ntoutsi, E.: Dealing with bias via data augmentation in supervised learning scenarios. In: Bates, J., Clough, P.D., Jäschke, R. (eds.), p. 24 (2018)

  11. Zhang, L., Wu, X.: Anti-discrimination learning: a causal modeling-based framework. Int. J. Data Sci. Anal. 4(1), 1–16 (2017). https://doi.org/10.1007/s41060-017-0058-x

  12. Luong, B., Ruggieri, S., Turini, F.: k-NN as an implementation of situation testing for discrimination discovery and prevention. In: Proceedings of the 17th ACM SIGKDD International Conference On Knowledge Discovery And Data Mining, pp. 502–510 (2011)

  13. Feldman, M., Friedler, S., Moeller, J., Scheidegger, C., Venkatasubramanian, S.: Certifying and removing disparate impact. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 259–268 (2015)

  14. Zemel, R., Wu, Y., Swersky, K., Pitassi, T., Dwork, C.: Learning fair representations. In: International Conference on Machine Learning, pp. 325–333 (2013)

  15. Zafar, M., Valera, I., Gomez Rodriguez, M., Gummadi, K.: Fairness beyond disparate treatment & disparate impact: learning classification without disparate mistreatment. In: Proceedings of the 26th International Conference On World Wide Web, pp. 1171–1180 (2017)

  16. Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 214–226 (2012)

  17. Zafar, M., Valera, I., Rodriguez, M., Gummadi, K.: Fairness constraints: mechanisms for fair classification. In: Artificial Intelligence and Statistics, pp. 962–970 (2017)

  18. Kamiran, F., Calders, T., Pechenizkiy, M.: Discrimination aware decision tree learning. In: 2010 IEEE International Conference on Data Mining, pp. 869–874 (2010)

  19. Fish, B., Kun, J., Lelkes, Á.: A confidence-based approach for balancing fairness and accuracy. In: Proceedings of the 2016 SIAM International Conference On Data Mining, pp. 144–152 (2016)

  20. Pedreschi, D., Ruggieri, S., Turini, F.: Measuring discrimination in socially-sensitive decision records. In: Proceedings of the 2009 SIAM International Conference On Data Mining, pp. 581–592 (2009)

  21. Kamishima, T., Akaho, S., Asoh, H., Sakuma, J.: Fairness-aware classifier with prejudice remover regularizer. In: Joint European Conference On Machine Learning And Knowledge Discovery In Databases, pp. 35–50 (2012)

  22. Mehrabi, N., Gupta, U., Morstatter, F., Steeg, G., Galstyan, A.: Attributing fair decisions with attention interventions. arXiv preprint arXiv:2109.03952 (2021)

  23. Gupta, U., Ferber, A., Dilkina, B., Ver Steeg, G.: Controllable guarantees for fair outcomes via contrastive information estimation. In: Proceedings of the AAAI Conference On Artificial Intelligence, vol. 35, pp. 7610–7619 (2021)

  24. Moyer, D., Gao, S., Brekelmans, R., Galstyan, A., Ver Steeg, G.: Invariant representations without adversarial training. In: Advances in Neural Information Processing Systems, vol. 31 (2018)

  25. Kraskov, A., Stögbauer, H., Grassberger, P.: Estimating mutual information. Phys. Rev. E 69, 066138 (2004)

  26. Chawla, N., Bowyer, K., Hall, L., Kegelmeyer, W.: SMOTE: synthetic minority over-sampling technique. J. Artif. Intell. Res. 16, 321–357 (2002)

  27. Kamiran, F., Calders, T.: Classifying without discriminating. In: 2009 2nd International Conference On Computer, Control and Communication, pp. 1–6 (2009)

  28. Friedler, S., Scheidegger, C., Venkatasubramanian, S., Choudhary, S., Hamilton, E., Roth, D.: A comparative study of fairness-enhancing interventions in machine learning. In: Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 329–338 (2019)

  29. Yang, K., Huang, B., Stoyanovich, J., Schelter, S.: Fairness-aware instrumentation of preprocessing pipelines for machine learning. In: Workshop on Human-In-the-Loop Data Analytics (HILDA’20) (2020)

  30. Zhou, Y., Kantarcioglu, M., Clifton, C.: Improving fairness of AI systems with lossless de-biasing. arXiv preprint arXiv:2105.04534 (2021)

  31. Delobelle, P., Temple, P., Perrouin, G., Frénay, B., Heymans, P., Berendt, B.: Ethical adversaries: towards mitigating unfairness with adversarial machine learning. ACM SIGKDD Explor. Newslett. 23, 32–41 (2021)

  32. Pessach, D., Shmueli, E.: Improving fairness of artificial intelligence algorithms in privileged-group selection bias data settings. Expert Syst. Appl. 185, 115667 (2021)

  33. Pessach, D., Shmueli, E.: A review on fairness in machine learning. ACM Comput. Surv. (CSUR) 55, 1–44 (2022)

  34. Chouldechova, A.: Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. Big Data 5, 153–163 (2017)

  35. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. ACM Comput. Surv. (CSUR) 54, 1–35 (2021)

  36. Angwin, J., Larson, J., Mattu, S., Kirchner, L.: Machine bias. Ethics Of Data And Analytics, pp. 254–264 (2016)

  37. Lambrecht, A., Tucker, C.: Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Manage. Sci. 65, 2966–2981 (2019)

  38. Datta, A., Tschantz, M., Datta, A.: Automated experiments on ad privacy settings: a tale of opacity, choice, and discrimination. arXiv preprint arXiv:1408.6491 (2014)

  39. Dastin, J.: Amazon scraps secret AI recruiting tool that showed bias against women. Ethics Of Data And Analytics, pp. 296–299 (2018)

  40. Barocas, S., Selbst, A.: Big data’s disparate impact. California Law Review, pp. 671–732 (2016)

  41. Pessach, D., Shmueli, E.: Algorithmic fairness. arXiv preprint arXiv:2001.09784 (2020)

  42. Kenfack, P., Khan, A., Kazmi, S., Hussain, R., Oracevic, A., Khattak, A.: Impact of model ensemble on the fairness of classifiers in machine learning. In: 2021 International Conference On Applied Artificial Intelligence (ICAPAI), pp. 1–6 (2021)

  43. Sagi, O., Rokach, L.: Ensemble learning: a survey. Wiley Interdisc. Rev.: Data Mining Knowl. Discov. 8, e1249 (2018)

  44. Galar, M., Fernandez, A., Barrenechea, E., Bustince, H., Herrera, F.: A review on ensembles for the class imbalance problem: bagging-, boosting-, and hybrid-based approaches. IEEE Trans. Syst. Man Cybern., Part C 42, 463–484 (2011)

  45. Kleinberg, J., Mullainathan, S., Raghavan, M.: Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807 (2016)

  46. Berk, R., Heidari, H., Jabbari, S., Kearns, M., Roth, A.: Fairness in criminal justice risk assessments: the state of the art. Sociol. Methods Res. 50, 3–44 (2021)

  47. Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., Wallach, H.: Improving fairness in machine learning systems: what do industry practitioners need? In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–16 (2019)

  48. Lee, M., et al.: Human-centered approaches to fair and responsible AI. In: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–8 (2020)

  49. Riedl, M.: Human-centered artificial intelligence and machine learning. Hum Behav. Emerg. Technol. 1, 33–36 (2019)

  50. Raff, E., Sylvester, J.: Gradient reversal against discrimination: a fair neural network learning approach. In: 2018 IEEE 5th International Conference On Data Science and Advanced Analytics (DSAA), pp. 189–198 (2018)

  51. Kamishima, T., Akaho, S., Sakuma, J.: Fairness-aware learning through regularization approach. In: 2011 IEEE 11th International Conference On Data Mining Workshops, pp. 643–650 (2011)

  52. Louizos, C., Swersky, K., Li, Y., Welling, M., Zemel, R.: The variational fair autoencoder. arXiv preprint arXiv:1511.00830 (2015)

Author information

Corresponding author

Correspondence to Ozlem Ozmen Garibay.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Tayebi, A., Garibay, O.O. (2023). Improving Fairness via Deep Ensemble Framework Using Preprocessing Interventions. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2023. Lecture Notes in Computer Science, vol 14050. Springer, Cham. https://doi.org/10.1007/978-3-031-35891-3_29

  • DOI: https://doi.org/10.1007/978-3-031-35891-3_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-35890-6

  • Online ISBN: 978-3-031-35891-3

  • eBook Packages: Computer Science (R0)
