
The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT

Abstract

Objective: The objective of this article is to provide a comprehensive identification and understanding of the challenges and opportunities associated with the use of generative artificial intelligence (GAI) in business. The study sought to develop a conceptual framework that brings together the negative aspects of GAI development in management and economics, with a focus on ChatGPT.

Research Design & Methods: The study employed a narrative and critical literature review and developed a conceptual framework based on prior literature. We followed a deductive line of reasoning in formulating our theoretical framework to keep the study’s overall structure coherent and productive. This article should therefore be viewed as a conceptual article that highlights the controversies and threats of GAI in management and economics, with ChatGPT as a case study.

Findings: Based on a deep and extensive query of the academic literature on the subject, as well as the professional press and Internet portals, we identified various controversies, threats, defects, and disadvantages of GAI, in particular ChatGPT. Next, we grouped the identified threats into clusters and summarized them as the seven main threats which, in our opinion, are as follows: (i) the lack of regulation of the AI market and the urgent need for such regulation; (ii) poor quality, lack of quality control, disinformation, deepfake content, and algorithmic bias; (iii) automation-spurred job losses; (iv) personal data violations, social surveillance, and privacy violations; (v) social manipulation and the weakening of ethics and goodwill; (vi) widening socio-economic inequalities; and (vii) AI technostress.

Implications & Recommendations: Regulating the AI/GAI market is crucial to ensure a level playing field, promote fair competition, protect intellectual property rights and privacy, and prevent potential geopolitical risks. The changing job market requires workers to continuously acquire new (digital) skills through education and retraining; as the training of AI systems becomes a prominent job category, it is important to adapt and take advantage of new opportunities. To mitigate the risks related to personal data violations, social surveillance, and privacy violations, GAI developers must prioritize ethical considerations and develop systems that put user privacy and security first. To avoid social manipulation and the weakening of ethics and goodwill, it is important to implement responsible AI practices and ethical guidelines: transparency in data usage, bias mitigation techniques, and monitoring of generated content for harmful or misleading information.

Contribution & Value Added: By drawing attention to the controversies and hazards associated with GAI and ChatGPT, this article helps underscore the importance of resolving the ethical and legal issues that arise from the use of these technologies.

Keywords

artificial intelligence (AI), generative artificial intelligence (GAI), ChatGPT, technology adoption, digital transformation, OpenAI, chatbot industry, technostress


Author Biography

Krzysztof Wach

Professor at Krakow University of Economics, Department of International Trade. 

Professor of Social Sciences (2020); habilitated doctor of economics (DEcon) within the specialisation of international entrepreneurship (2013); PhD in management within the specialisation of small business management (2006); Master in International Economics (MIEcon) within the specialisation of international trade (2001).

His research interests include international entrepreneurship, international business, European businesses, and family firms.

Visiting professor at various universities, including Grand Valley State University (Grand Rapids, USA), Roosevelt University (Chicago, USA), University of Detroit Mercy (Detroit, USA), Loyola University Chicago (Chicago, USA), Northumbria University (Newcastle, UK), and the Technical University of Cartagena (Spain).

Member of international scientific societies, including the Academy of International Business (AIB), the European International Business Academy (EIBA), the Strategic Management Society (SMS), the International Council for Small Business (ICSB), the United States Association for Small Business and Entrepreneurship (USASBE), and the Entrepreneurship Research and Education Network of Central European Universities (ERENET).

Editor-in-chief of the scientific quarterly 'Entrepreneurial Business and Economics Review' (EBER, published by the Cracow University of Economics, Poland) and member of the editorial boards of several scientific journals, including the bi-annual 'Business Excellence' (published by the University of Zagreb, Croatia), the quarterly 'Studia Negotia' (published by Babeș-Bolyai University in Cluj-Napoca, Romania), the annual 'Przedsiębiorczość - Edukacja'/'Entrepreneurship - Education' (published by the Pedagogical University of Cracow, Poland), the bi-annual 'Journal of Entrepreneurship, Business and Economics' (JEBE, published by Scientificia), and the quarterly 'Horyzonty Polityki'/'Horizons of Politics' (published by the Jesuit University in Krakow, Poland).

An OECD national expert for entrepreneurship in the years 2012-2014 and a participant in various international education and research projects (e.g., Jean Monnet, Atlantis, the International Visegrad Fund (IVF), and the Central European Initiative (CEI)).

Author of several books, including monographs and scholarly books as well as practical guides for entrepreneurs, and of over 150 articles in the field of entrepreneurship and the internationalization of small and medium-sized enterprises.

Cong Doanh Duong

Assistant Professor at the National Economics University in Hanoi (Vietnam). PhD in Business Management (2019, National Economics University, Vietnam, and University of Szczecin, Poland, under the Erasmus Mundus Programme); Master of Science in Management (2016, KEDGE Business School, France); Bachelor of Business Administration (2011, National Economics University, Vietnam). His research interests include entrepreneurship, corporate social responsibility, and green consumption.

Correspondence to: Doanh Duong Cong, PhD, Department of General Management, Faculty of Business Management, National Economics University, 207 Giai Phong, Hanoi, Vietnam, e-mail: doanhdc@neu.edu.vn


