
A Comprehensive Review of Bias in Deep Learning Models: Methods, Impacts, and Future Directions

  • Review article
  • Published:
Archives of Computational Methods in Engineering

Abstract

This review examines the multifaceted nature of bias in deep learning. As artificial intelligence and machine learning technologies become increasingly integrated into everyday life, understanding and mitigating bias in these systems is of paramount importance. The paper analyzes data bias, algorithmic bias, and societal bias, and explores how these dimensions are interconnected. Drawing on the existing literature and recent advances in the field, it offers a critical assessment of bias mitigation techniques, examines the challenges of addressing bias, and argues that an intersectional, inclusive approach is needed to rectify disparities effectively. The review also underscores the importance of ethical considerations in the development and deployment of deep learning models, highlighting diverse representation in data, fairness-aware algorithms, and interpretability as key elements of bias-free AI systems. By synthesizing existing research into a holistic overview of bias in deep learning, the paper aims to contribute to the ongoing discourse on mitigating bias and fostering equity in artificial intelligence, and to serve as a foundation for future research and a guide for practitioners, policymakers, and stakeholders navigating this complex landscape.
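As a concrete illustration of the fairness-aware evaluation the review discusses, the short Python sketch below computes the demographic parity difference, a widely used group-fairness metric, for a hypothetical binary classifier. The metric itself is standard in the fairness literature, but the function name and the toy data are illustrative assumptions, not taken from the paper.

    # Illustrative sketch: demographic parity difference for a binary classifier.
    # The predictions and group labels below are made up for demonstration.
    import numpy as np

    def demographic_parity_difference(y_pred, group):
        """Absolute difference in positive-prediction rates between two groups."""
        y_pred = np.asarray(y_pred)
        group = np.asarray(group)
        rate_a = y_pred[group == 0].mean()  # positive-prediction rate for group 0
        rate_b = y_pred[group == 1].mean()  # positive-prediction rate for group 1
        return abs(rate_a - rate_b)

    # Hypothetical predictions (1 = positive outcome) and protected-group labels.
    y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

    print(demographic_parity_difference(y_pred, group))  # 0.2 for this toy data

A value of zero indicates that both groups receive positive predictions at the same rate; larger values signal the kind of disparity that bias mitigation techniques aim to reduce.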


Data Availability

Not applicable.


Funding

No funding was received.

Author information


Corresponding author

Correspondence to Milind Shah.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Ethical Approval

This research did not involve human or animal subjects.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Shah, M., Sureja, N. A Comprehensive Review of Bias in Deep Learning Models: Methods, Impacts, and Future Directions. Arch Computat Methods Eng (2024). https://doi.org/10.1007/s11831-024-10134-2


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s11831-024-10134-2
