
Exploring differences in ethical decision-making processes between humans and ChatGPT-3 model: a study of trade-offs

Original Research · Published in AI and Ethics

Abstract

This study investigated the decision-making of ChatGPT-3, an artificial intelligence (AI) system, in ethical dilemmas, with a focus on the implications for AI development. Using a within-subject design, participants were presented with three ethical dilemmas, each involving a conflict between competing values; the same scenarios were also run through ChatGPT-3 for comparison. Notable differences emerged between human and ChatGPT-3 decision-making, especially in scenarios where the choices were distinctly labeled as good or evil and directed toward a promising outcome. These findings inform AI development by underscoring the importance of incorporating ethical theories and processes into AI systems, a crucial step toward designing models that are more ethically aware. The study also draws attention to the ethical impacts of AI usage, helping policymakers and regulators make informed decisions about the role AI should play in decision-making and regulation in ethically significant contexts.
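The paper itself does not publish its querying procedure. Purely as an illustration of the "run through ChatGPT-3 for comparison" step described above, the following minimal sketch shows how an ethical dilemma could be posed programmatically to a GPT-3-era model; the OpenAI Python SDK, the model name, and the trolley-style prompt are all assumptions introduced here, not details taken from the study.

```python
# Illustrative sketch only: the study does not publish its querying code.
# Assumes the OpenAI Python SDK (v1.x) and a hypothetical dilemma prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical trolley-style dilemma standing in for the study's
# three (unpublished here) scenarios.
dilemma = (
    "A runaway trolley will hit five people unless you divert it to a side "
    "track, where it will hit one person. Do you divert it? Answer 'divert' "
    "or 'do not divert', then briefly justify your choice."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",   # a GPT-3-era chat model, chosen for illustration
    messages=[{"role": "user", "content": dilemma}],
    temperature=0,           # low randomness, for more repeatable comparisons
)
print(response.choices[0].message.content)
```

In a within-subject comparison of this kind, the model's verdict and justification for each scenario would be recorded alongside the human participants' choices before the two response sets are contrasted.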



Author information

Corresponding author

Correspondence to Umair Rehman.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Rehman, U., Iqbal, F. & Shah, M.U. Exploring differences in ethical decision-making processes between humans and ChatGPT-3 model: a study of trade-offs. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00335-z


