Article

Beyond the Business Case for Responsible Artificial Intelligence: Strategic CSR in Light of Digital Washing and the Moral Human Argument

by
Rosa Fioravante
1,2
1
DESP, Department of Economy, Society and Politics, University of Urbino Carlo Bo, 61029 Urbino, Italy
2
CLSBE—Catholic Lisbon School of Business and Economics, Palma de Cima, 1649-023 Lisbon, Portugal
Sustainability 2024, 16(3), 1232; https://doi.org/10.3390/su16031232
Submission received: 18 December 2023 / Revised: 22 January 2024 / Accepted: 26 January 2024 / Published: 1 February 2024

Abstract:
This paper, normative in nature and scope, addresses the merits and limits of the strategic CSR approach when confronted with current debates on the ethics of artificial intelligence, responsible artificial intelligence, and sustainable technology in business organizations. The paper summarizes the classic arguments underpinning the "business case" for the social responsibility of businesses and the main moral arguments for responsible and sustainable behavior in light of recent technological ethical challenges. Both streams are confronted with the organizational ethical dilemmas arising in designing and deploying artificial intelligence, which yield tensions between social and economic goals. While recognizing the effectiveness of the business argument for responsible behavior in artificial intelligence, the paper addresses some of its main limits, particularly in light of the "digital washing" phenomenon. The exemplary cases of digital washing and corporate inconsistency discussed here are taken from the literature on the topic and re-assessed in light of the proposed normative approach. Hence, the paper proposes to overcome some limits of the business case for CSR applied to AI, which mainly focuses on compliance and reputational risks and invites recourse to digital washing, by highlighting the normative arguments supporting a moral case for strategic CSR in AI. This work contributes to the literature on business ethics and strategic CSR at its intersection with the ethics of AI by proposing a normative point of view on how to deploy the moral case in organizations when dealing with AI-related ethical dilemmas. It does so by critically reviewing the state of the art of the debate, which so far comprises different streams of research, and by adding to this body of literature what is here identified and labeled the "human argument".

1. Introduction

Ethical issues arising from the design and deployment of artificial intelligence (AI) in business organizations are increasingly debated. The undesirable social outcomes entailed by AI are shaping scholars' and practitioners' concerns about, among other issues, protecting human rights, ensuring environmental sustainability, facing technological unemployment and workforce reskilling, dealing with racial and gender discrimination, and safeguarding privacy and data control. Artificial intelligence is here defined as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments" (OECD 2019 [1], p. 7). AI can be further understood as a collection of technologies with the distinctive "ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation" [2].
In light of such disruption, corporate social responsibility has become critical to ensuring companies' stance on ethical AI, both in the form of taming and preventing undesirable consequences and of supporting AI for positive social and environmental development for internal and external stakeholders [3]. Corporations are exposed to a number of unprecedented ethical dilemmas when performing decision making over trade-offs between profit maximization through AI and ethical AI-driven practices [4]. Tensions arising from such dilemmas may be sidestepped through the adoption of "washing" practices (also termed ethical digital washing, blue washing, AI ethics washing, or machine washing): "tech giants are working hard to assure us of their good intentions surrounding AI. But some of their public relations campaigns are creating the surface illusion of positive change without the verifiable reality. Call it 'machinewashing'" [5]. Such use of washing through symbolic actions and misleading claims has been discussed as a phenomenon not only pertaining to the domain of communication strategy but also extensively involving the moral stance of a CSR strategy [6], especially in the field of ethical AI, where doing nothing (or only pretending to act) is already considered a source of potential harm due to the underestimation of future shortcomings.
Strategic CSR, which aims to extensively adopt ethical behavior in corporate practices while creating social and economic prosperity at the same time [7], must still confront the need to give additional weight to the moral arguments supporting corporate involvement in AI ethical responsibility. Although the relationship between CSR and corporate performance is a troubled one, with scholars struggling to find an unquestionable positive correlation between the two, CSR has indeed proven beneficial for the bottom line in a vast number of cases [8,9,10]. Major theories, such as CSV [11], have built on the so-called "business case" for creating economic value while exploiting the chance to provide a positive social impact or to solve a social issue as yet untackled by the market [9]. Nonetheless, criticism of such an approach has underlined how tensions between socially oriented and economic-based motives might arise, with significant consequences for organizational survival and success as well as for ethically desirable outcomes [12]. Although Porter and Kramer have answered such criticisms of their CSV perspective by highlighting how the use of the business case is essential to draw attention and elicit positive social impacts, dilemmas arising in AI ethics present a degree of complexity [13] such that it would be highly difficult to come up with optimal economic and socially oriented solutions easily compatible with CSV strategies.
Such dilemmas are quite common within a widening grey area, where legislation is still lacking and public policies are falling behind the fast pace of technological advancement [14], leaving increased space for corporations either to play a critical role in protecting and enhancing human rights and human development [15] or to increase the chances of AI's harmful consequences through irresponsible behavior [16]. Grey areas concern both businesses developing AI, confronted with the problem of designing responsible AI and ethical machines (see, for example, the value-alignment problem [17]), and businesses purchasing new technologies and using them to increase their efficiency and performance. In the latter case, with businesses struggling not to fall behind innovative competitors in the technological race, the ethical concerns posed by the use of AI might easily be underestimated in favor of time- and resource-saving approaches [18].
While some scholars have argued that corporate interest in ethical AI stems from the concern that failing to operationalize ethical AI will negatively impact economic performance [19], others have sharply criticized current trends of ethical involvement by corporations, especially big tech players, by labeling them "ethical digital washing" [20]. To navigate such a debate, going back to the arguments concerning the possible existence (or not) of a "market for virtue" [21] is therefore urgent in order to assess the motives and drivers of responsible AI behavior in business organizations.
This work aims to offer a critical review of the "business case" and the "moral case" for strategic CSR in light of artificial intelligence-related ethical dilemmas in everyday organizational life. It seeks to contribute to the subfield of strategic CSR and AI [3,22] by adding to it the need for a "moral argument" to support ethical behavior when confronting AI's widespread ethical challenges.
Indeed, the paper enhances our knowledge of the debates stemming from the overlap of CSR and the ethics of AI in at least three ways: first, by refashioning the classical debate on the business arguments and moral arguments supporting CSR in light of current challenges for the management of AI [23,24]; second, by looking into already circulated cases of digital washing and corporate inconsistency, discussing them through a business ethics lens to point at washing practices as sources of underestimating the dehumanization perils entailed by AI [25,26]; third, by presenting a "human argument" able to overcome the current limits of the classical "business argument" and the already circulated "moral argument" in order to support integral ethical commitment to AI design and deployment for social purposes [27,28].
The purpose of this paper is to contribute to the normative literature in the field of business ethics and sustainability [29,30], conceived as a body of knowledge of cornerstone importance for informing applied ethics in business and administration practices [31] as well as future empirical research [32]. In more detail, the paper follows the "symbiotic approach" to business ethics' normative/empirical split [32,33]. Such an approach entails a practical relationship between the two streams: normative research (to which this work pertains) is entrusted with morally rooted debates on business organizations and practices, while empirical research is entrusted with enquiries that add to, and further detail, the problems debated in normative works. The latter approach is distinct from "theoretical integration", where normative concepts can be directly drawn from empirical studies [32]. To form the arguments underpinning this work, the author has therefore drawn the major concepts discussed from major normative studies in the field (for example, the business case vs. the moral case for CSR are highlighted in cornerstone textbooks of the discipline [34]). The following paragraphs are therefore each dedicated to themes derived from an extensive literature review conducted by summarizing and critically assessing the relevant body of normative production. The descriptive use of cases, reported and commented on, aims to offer relevant, real-life examples to which the normative arguments illustrated can be applied [35]. This work seeks to add to the literature adopting a "foundational normative method": analyzing business problems by highlighting their roots in ethical (or religious) theories, referring to such theories to discuss the motivations and outputs of decision making, and evaluating their relevance for the theory [36]. It does so with a specific focus on the intersection of the ethics of AI and the ethical underpinnings of strategic CSR.
The remainder of the paper is structured by devoting one section to summarizing each major debate in the field and a subsequent section to its refashioning in light of AI developments. It is organized as follows: Section 2 presents the main arguments supporting the business case for strategic CSR, while Section 3 highlights its main merits and limits when confronted with AI developments and the corresponding ethical pitfalls. Section 4 presents the main arguments supporting the moral case underpinning CSR strategies, and Section 5 discusses such arguments in light of AI ethical disputes. Section 6 presents the washing strategy and the related tensions arising from strategic CSR adoption, while Section 7 discusses digital ethical washing in corporations addressing AI design and how to counterbalance such a tendency by leveraging the "human argument".

2. Strategic CSR and the Business Case

The link between corporate social responsibility (CSR), strategy, and competitive advantage is a classic exploration within the CSR literature and business ethics debates [37]. Most of this debate has reached a consensus around the paradigm of "strategic CSR". Camilleri and Sheehy [7] define the latter as the integration of responsible practices with corporate practices, thus strategically deploying CSR as a source of multiple targeted positive outcomes, such as competitive advantage, long-term economic sustainability, the discouragement of additional regulation, and first-mover advantage in setting high ethical standards [38]. In all its accounts, strategic CSR has the ultimate aim of creating economic and social value at the same time [39]. Central to this view is an understanding of the practices, policies, and processes aimed at addressing social responsibility as tied to their impact on corporate performance. As Vishwanathan et al. [40] have underlined, among others, four mechanisms of the positive impact of CSR on corporate performance are as follows: "(1) enhancing firm reputation, (2) increasing stakeholder reciprocation, (3) mitigating firm risk, and (4) strengthening innovation capacity". Other mediating mechanisms have been identified in strategic planning and the targeting of positive outcomes for the host country in MNCs [41]; the efficiency of CSR activities guaranteed through better stakeholder relationships, venturing into new opportunities, focusing on brand equity, and leveraging communication [42]; substantive self-regulatory codes of conduct [43]; and enhanced social perception of the firm's CSR through psychological factors such as warmth and competence [44]. Responsible features integrated within corporate governance have further been identified as a mediating factor in the positive impact of CSR investments on corporate performance [45].
Strategic CSR has been discussed as yielding positive outcomes for employees' commitment [46] and for legitimacy in potentially adverse contexts [47], and as viable support for organizations to succeed in sustainability and to thrive. Nonetheless, an unequivocal causal relationship between strategic CSR and positive corporate performance remains empirically contested [48,49], although several studies confirm that cases of positive correlation are not rare and, under certain conditions, can lead to profit maximization. This is particularly true for long-term orientation strategies [50]. For instance, in cases of advertising, or of leveraging the willingness to pay premium prices for products with embedded social value, this is shown in consumers' appreciation of congruence between goals, actions, audiences, and the firm's proposed strategy [10,51,52].
Such an extensive body of empirical literature relies on what has been summarized as "the business case" for strategic CSR. This stream of literature is closely tied to the classic debate on the business case for the foundational models of CSR of any kind, which comprises a variety of approaches, mostly grouped as follows: "(1) cost and risk reduction; (2) gaining competitive advantage; (3) developing reputation and legitimacy; and (4) seeking win–win outcomes through synergistic value creation" [53]. The business case has been at the core of the now widespread approach of creating shared value proposed by Porter and Kramer [11] and further popularized with the well-known motto of "doing good and doing well" [54]. Such a paradigm has clearly focused on the existing nexus between business and society as critical for strategic decision making. Nonetheless, as Carroll and Shabana discuss, when arguing for strategic investment in CSR, there is no need to prove a direct relationship between CSR and performance: the link can be indirect, or, if no clear link is identified, other variables connected to responsible behavior might still be beneficial to firm survival, change, and sustainability. Together with external pressures, such as reputation, customers' demands, and regulatory frameworks, the business case has indeed been identified as one of the principal drivers of responsible management [55]. This is true whether responsible strategies are adopted as a reaction to pressures or to keep ahead of potential future risks, regulations, and backlashes [56]. The business case has become so central that even critics have admitted its incontestable relevance: "while profitability may not be the only reason corporations will or should behave virtuously, it has become the most influential" [21].

3. The Business Case for AI in Strategic CSR

Thanks to their focus on long-term orientation, foreseeing future challenges and preparing in advance for possible shortcomings in ethical and legal compliance, the motives leading to a strategic CSR orientation have acquired new urgency in light of AI's recent disruptive evolution. As one of the major projected sources of increased business profitability that, at the same time, yields major social impacts in terms of both positive and negative outcomes, responsible AI use is a field of increasing interest for CSR scholars and experts [14,57].
Focusing on corporate strategy towards AI as a driver of (or impediment to) competitive advantage, a way to enable better performance, and a means to reach new goals, the literature has examined crucial aspects of novel strategic decision making, reforms in organizational culture, and the alignment of innovation capacity with newly set goals [58,59]. While the literature provides clear indications of AI's potential to increase productivity, free humans from repetitive and unsatisfying tasks, and unleash the offering of new services and products, the same optimistic literature on the future of AI warns about the need to harness such potential with careful ethical evaluations and practices [60]. Balancing this potential against its related ethical risks concerns virtually every aspect of organizational life: building employees' trust and confidence in the face of innovation challenges, enhancing trust from consumers, keeping pace with the upskilling and reskilling demanded of the workforce, promoting ever-changing ways of monitoring competitors' technological advancements, designing systems of control over machines' decision making, and meeting the fast-growing need for compliance with public legislation [61].
The extent and features of the unprecedented time and energy consumption required for organizations to focus on AI as a driver of competitive advantage can easily lead them to underestimate or postpone guarding against ethical pitfalls, especially at a time when common paths and clear guidelines are not yet established [62]. While the grey area between ethical conduct and legal obligations is expanding, due to AI advancing at a faster pace than lawmakers' and policymakers' interventions [63], business organizations' role in preventing, taming, and addressing AI's social and environmental impact is growing in centrality [64].
While organizations are generally embracing AI-related CSR strategies proactively [65], voluntary engagement in the effective taming of AI's social impact is still in its infancy. Nonetheless, the business case for investing in CSR linked to AI has already gained popularity; by focusing on maximizing AI's business potential while minimizing its threats, organizations aim to exploit the social disruption brought by AI for business opportunities [3,66]. Nonetheless, the ethical consequences of AI's future steps foreshadow a much harder challenge than aligning business and ethical considerations in the short term, due to AI's forecasted (although still debated) ability to surpass human work capabilities in many sectors [3]. Meeting such a challenge, ranging broadly from ethical risk-management strategies to engaging employees in proactively embracing change and operationalizing ethical thinking throughout the organization by raising awareness [67], requires the mobilization of material and immaterial resources that need a motivational background broader than the mere business case. In the next sections, the paper discusses several arguments supporting the latter view in light of possible backlashes not only for organizations mishandling ethical risks but also for humanity at large.

4. The Moral Case for Strategic CSR: Humanizing Stakeholders within Ethical Theories

In their cornerstone work "mapping the territory" of CSR theories, Garriga and Melé [68] underline, alongside the business argument and others, the ethical theories underpinning CSR as a way to reach a desirable society by focusing on the common good for its own sake. This body of literature comprises the so-called "moral arguments" supporting CSR. The theories belonging to this approach, mostly but not solely tied to Catholic social teaching and the Aristotelian tradition [69], display the following requirements for CSR: "(1) meeting objectives that produce long-term profits, (2) using business power in a responsible way, (3) integrating social demands and (4) contributing to a good society by doing what is ethically correct" [68]. Among these theories, normative stakeholder theories have a central role in stressing the relational value of business activities and the importance of interpersonal relationships at the individual and social level as an ontological component of business organizations [70]. Notably, CSR definitions often rely on stakeholder theory, although, when focused on profit maximization, they tend to adopt an instrumental view of stakeholder relations [71,72]. On the other hand, normative stakeholder theory has largely contributed to underpinning ethical views of CSR due to its ability to stress the intrinsic value of stakeholders' dignity and, consequently, of stakeholders' demands. Such an approach has thus been fundamental in "humanizing" stakeholders, i.e., treating others with respect for their integral humanity instead of as a means to achieve economic benefits [73].
Strategic CSR also relies heavily on stakeholder relations as a key driver of economic and social value creation [74]. The literature on the topic has extensively addressed the role of stakeholder pressure in a firm's strategic orientation towards substantive CSR depending on stakeholder salience and resource availability [43], highlighting how firms decide to engage in symbolic or substantive actions depending on the targeted groups and the type of pressure applied. The burgeoning empirical literature enquiring into the features and models of stakeholder management and engagement within strategic CSR, as well as into its role in bettering stakeholder relationships, relies, as mentioned, on a vast body of normative literature. Indeed, it is by relying on the moral arguments for CSR that responsible behaviors can be explained as an authentic strategic orientation towards stakeholders as persons, with due respect for their wellbeing and freedom and for their psychological, physical, and spiritual integrity [75,76]. Such an understanding also helps in addressing virtuous models of stakeholder engagement [77], as well as offering a suitable framework for debating the role of worldviews, systems of belief, and ideologies (more than factual evidence) in leading executives to adopt CSR [78,79].
Moral arguments backing responsible behavior are thus relevant for strategic CSR, as they underline how the latter can rarely be pursued without a significant component of spiritual, ideal, and value-laden influences, especially because these have a crucial impact on the quality of the stakeholder relations that are built [80,81]. If virtuous stakeholder engagement requires moral motives, then the latter can be understood as a prerequisite for any best practice of pursuing CSR, as has been summarized: "Only when firms are able to pursue CSR activities with the support of their stakeholders can there be a market for virtue and a business case for CSR" [53] (p. 102). From this perspective, the business argument backing CSR would actually result from a prior moral engagement rather than being the major motive underpinning corporate positive social impact.

5. The Moral Case for Responsible AI

Two main concerns, and corresponding approaches, can be identified within the current debates concerning business's responsibilities for ethical AI. The first body of concerns deals with the responsibility of business actors investing in and designing new AI technologies (see, for example, the case of OpenAI starting as a non-profit and quickly developing into a highly profitable organization [82]). A second body of literature deals with the responsibility of businesses implementing and using AI for any given organizational scope, incurring numerous ethical shortcomings (see, for example, the classic case of the ethical implications of self-driving cars [83] or the extensive concerns about AI bias in recruiting and human resource management [84]). Corporate responsibility thus has to be discussed concerning both the design and the use of technology. An underestimation of the moral arguments backing CSR, and hence of the foreseeing of suitable actions, depends on two main theoretical shortcomings: first, the understanding of the firm as solely an economic actor, which overlooks its ontological being as a moral and social body [85]; second, and similarly reductionist, an understanding of technological progress as neutral and not imbued with moral considerations [86]. The combination of the two in the field of organizational ethics concerning AI leads to overlooking the social impact of corporate power and behavior, which affects virtually all future CSR perspectives [3].
Another moral argument relies on the intrinsic nature of the ethical questions entailed by AI, which can hardly be posed and faced while prioritizing the bottom line over social concerns. CSR practices and policies meant to meet the ethical challenges arising with AI do not only have to operationalize often vague lists of principles and ethical codes but must also actually provide answers and set standards for fundamental questions, such as: Who should be held accountable for the design and impact of AI, and how? What are the purposes and the limits of organizational automation through AI? Which levels and models of stakeholder engagement are needed to prevent unethical outcomes? [14,87]. Most of these questions do not bear a direct link with corporate performance, and when they do, it is usually through a trade-off between speed in technology development and adoption on the one hand and careful ethical guardrails on the other.
Most of these questions entail a refashioning of the moral case for strategic CSR, as well as for CSR in general terms. For instance, the classic supply-chain ethical conundrum of exploiting the Global South is now posing, in labor-intensive AI sectors, some of the very same issues already seen with MNCs' use of overseas sweatshops to produce garments and low-quality products (e.g., the textile industry, with its scandals and boycotts; see, among many others, the Nike case) [88,89]. By relying on low-paid and underpaid labor, for example in the Philippines, to train algorithms and ChatGPT generative AI, Scale AI is facing accusations of creating "data sweatshops" [90], while competitors, such as Enabled Intelligence (https://enabledintelligence.net/news/opinion-ai-data-sweatshops-are-bad-news-and-threaten-national-security/ (accessed on 17 December 2023)), are using the scandal to share ethical stances that differentiate them. Nonetheless, reputational backlashes happen only when consumers and external stakeholders have a certain level of awareness of and access to relevant information, which in the case of AI might be additionally difficult, as shared knowledge is rare. Most notably, even in the case of self-described transparent organizations, the sharing of information is scarce [91].
A third issue has been raised in the form of an ethical concern over the "power concentration" of AI developers and owners. Many have discussed how big tech companies are pursuing ambitious AI projects and publicly warning about possible threats to humanity while, at the same time, not foreseeing any change in their governance structures and internal processes that would fundamentally alter their problem-solving capacity towards ethical shortcomings [92]. Addressing what have been called "ethical nightmares" would instead force companies to adopt advanced frameworks to address potential harms before their consequences are widespread, relying on senior executives' integrity and guidance to keep ahead of future challenges rather than perpetually trying to solve them ex post [93]. It would force them to adopt a concept of accountability entailing difficult but clear choices on governance models and pursued goals, which would not leave current approaches unaltered [94]. The latter compels critically addressing the relationship between business and society, exceeding the business case for social responsibility alone, to approach a collaborative mindset between private and public interest [95,96]. Indeed, for all the above-mentioned reasons and stakes, AI is the field par excellence in which considerations other than economic ones come into play when addressing the relationship between business and society. In the most notorious case, the issue has been addressed by turning public interest into private ownership instead of the other way around [82]. Concern has thus been raised about a progressive alteration of the relationship between business and society, not in favor of stakeholders but rather of the few people in charge of big corporations profiting from AI's newest frontiers; Bietti has long argued that technologies reflect existing social power relations and are influenced in their design by power structures [20]. In the same vein, as Elliott et al. have discussed, digital corporate responsibility can face huge vested interests at stake in a digital society, as the latter have to be faced not only by taming unwanted consequences at the micro level of communities and the general public but also at the meso and macro level of organizations such as corporations and the big tech companies owning the technologies [24].
Another argument is that corporate culture is among the main critical variables for building a significant strategic CSR orientation towards ethical AI and, as such, requires beliefs and ideals about the desirability of moral behavior [23,97]. Indeed, even when irresponsibility is addressed from the standpoint of "practical ethics" (i.e., "immoral actions as violations of established habits of a culture" [57] (p. 787)), within organizations it entails tolerating or failing to prevent misbehavior as part of organizational culture, especially when it takes the form of "ordinary irresponsibility". Furthermore, culture, relying on values, places the latter at the center of the debate on the main drivers of ethical AI. Indeed, a focus on ethical challenges leads to discussing value-salient notions such as fairness, basic entitlements, and inequality, among others [14,98], rather than a singular principle of profit maximization and an efficiency rationale. Aligning AI with such human values and evaluations is among the most difficult goals of ethical AI [17]. In addition, Lindebaum et al. have discussed (in line with the classic critique of all-encompassing technique by Adorno and Horkheimer [99]) the ways technological advancement can bring organizations to converge towards a singular principle of rationalization, progressively abandoning and underestimating value-laden processes and rationales [100]. The mechanization of values, just like that of any intrinsically humanistic feature, leads to impoverished social intercourse and outcomes, thus dehumanizing the organization itself in the medium to long run. On the ethical spectrum ranging from "use AI for good" to "malicious use of AI", the moral concerns of organizations are at the core of choosing where to position themselves, through a responsible management of technology rooted in an understanding of the latter as neither univocal nor neutral [101,102].
This extensive body of concerns about the need to converge on moral arguments supporting and backing responsible AI design and development, thus refashioning traditional CSR debates, is further detailed by scholars analyzing washing phenomena and, particularly, washing practices concerning AI ethics. In the next few paragraphs, some exemplary cases are illustrated and debated with respect to their relevance for strategic CSR theory.

6. Washing and CSR Tensions

A further argument for shifting the focus from the business case to the moral underpinnings of social responsibility is tied to the tensions that commonly arise when implementing and integrating strategic CSR. Integrating CSR into core structures, activities, policies, and processes carries several major risks and internal shortcomings: tensions arising from the unexpected incompatibility of CSR goals with previous or current business goals; inconsistent organizational behavior in the course of organizational change; and clashes between competing decision-making rationales that make it difficult to align economic and social goals [103]. Tensions thus commonly arise both internally [104,105] and externally [106,107]. While some scholars underline how CSR can itself be the terrain upon which to navigate and overcome tensions by developing responsible practices beyond a top-down approach [103], others sharply criticize strategic CSR and CSV approaches, arguing that they underestimate the occurrence of trade-off situations as opposed to the availability of win–win options between economic and social goals [12]. Indeed, according to the main critics of CSV and strategic CSR, such perspectives fall short of grasping how conflicting social and economic goals may be in everyday settings.
Such tensions have a wide impact on organizations; far from being merely idealistic, or tied to black-or-white tendencies to act solely in the interest of shareholders or solely out of altruistic purposes of social impact, tensions may inform strategic decisions at all organizational levels, including the difficult dilemma of how to evaluate the convenience of CSR policies themselves [108]. Moral problems in business activities mostly arise in the form of ethical dilemmas, and their nature is often concealed from decision makers themselves; rarely are moral issues clearly “black or white”; rather, they present contexts where the moral choice to be made is questionable and unclear. Hence, practical ethical behavior depends on (at least) two dimensions of ethical business problems: the clarity of moral judgment (evaluating what is the right thing to do) and the level of motivation (the desire to do the right thing), which together determine the degree of urgency of ethical dilemmas [109]. Initiatives and policies of “washing” have been identified as a common option, although less common than previously thought [25], chosen by corporations to face the above-mentioned tensions and to try to align the business case with “symbolic” CSR strategies [108,110]. Indeed, as CSR initiatives influence a variety of stakeholders and, especially, customer relationships [111], their feigned enactment for deceptive purposes can be enlisted within strategic deployment as well [6].
“Washing” is a polysemic term that has acquired numerous definitions and has been used to describe and analyze slightly different phenomena. While most of the literature focuses on greenwashing as “disinformation” and a communication strategy [112], some have underlined how washing can refer to more substantive practices, such as feigning ethical behavior by blaming unethical outcomes on other actors [113] and engaging in symbolic actions to divert attention from other questionable practices. Washing practices are not confined to sustainability issues but are also common in gender issues, sometimes under the label of “pinkwashing” [114], or in “rainbow washing” concerning LGBTQ+ community support [115]; most of these can now be subsumed under the term “wokewashing”: pointing to a corporation’s sensibility and actions in support of marginalized, stigmatized, and underprivileged social groups in the attempt (authentic or pretended) to be considered “awake”, that is, conscious and active, in fighting social inequalities [116].
Research generally confirms that washing strategies negatively affect the bottom line [117]. Greenwashing, for instance, can backfire depending on the efficacy of signaling and the attentiveness of stakeholders, such as non-governmental organizations [118], especially when it comes to gaining environmental legitimacy. Indeed, inconsistencies between corporate images and messages addressed to an external audience and internal policies and practices are easy to spot and mostly lead to perceived inauthenticity [119]. Moreover, inauthentic involvement in social issues usually leads to overstated promises and stances, which negatively affect the bottom line when such promises go unfulfilled [119]. Despite this wide consensus among scholars, the motives of washing are still debated [120], along with models of stakeholder backlash and past examples of “shamed” organizations [121].

7. Ethical Digital Washing and the Need for a Moral Human Argument

Critics of current corporate involvement in self-regulation concerning the ethics of AI, especially in the form of lists of general principles, have argued that such an approach is unsuited to solving the major social issues arising for business organizations. This phenomenon has been discussed as a sign of the “uselessness” of the whole field of AI ethics [122]. Conversely, it has been addressed as proof of the potential of ethical AI research, as only a part of the identified mitigating strategies is currently being adopted by corporations [65]. Among other major concerns, the idea that “washing” practices have developed around AI ethical issues is spreading within the literature.
Machinewashing, also labeled “blue-washing” and “ethical digital washing”, has been defined as “misleading information about ethical AI communicated or omitted via words, visuals, or the underlying algorithm of AI itself. Furthermore, and going beyond greenwashing, machinewashing may be used for symbolic actions such as (covert) lobbying and prevention of stricter regulation.” [28]. Ethical digital washing, as a phenomenon of ethical instrumentalization, has been further detailed as “corporate practices that co-opt the value of ethical work”: limiting ethical experts’ intervention to symbolic hiring with no internal room for maneuver, hiring policies focused on maintaining uncritical consensus, the use of nudging techniques to divert attention from the intrinsic ethical issues of certain technologies, and focusing on the ethical design of specific technologies while ignoring or defunding actions addressing system-level unethical consequences [20].
Washing practices in AI, as a counterweight to effective expert ethical intervention, can thus be conceived as a strategic shortcut around real commitment to the human-centered development and use of technologies. Such shortcuts can be viable and useful to businesses, as they exploit two current trends: first, narrow approaches to ethical practices in AI; second, existing imbalances of power. Concerning the first, as Van Maanen argues, top-down, principle-listing approaches to ethical behavior in technological advancement should be replaced by practices based on a bottom-up understanding of the human stakes involved in each critical situation. Such practices would be better suited to the very nature of ethics, which resides within phronesis rather than episteme: “In contrast to episteme—whose statements have an idealized, atemporal, and necessary character—practicing phronesis is concrete, temporal, and presumptive. Phronesis is the art of judging what to do in concrete situations, without assuming that the judgments will hold for everyone, everywhere, and every time” [26], p. 9. Washing practices via ethical codes and lists of principles are thus to be understood as a way to exploit internal and external unawareness of the inherent realm of ethical conduct and, consequently, of the moral obligations of the corporation. Concerning the second, operationalizing principles into practices is naturally very difficult in a fast-changing technological ecosystem; nonetheless, critics of corporate ethical behavior point to power imbalances and malicious motives behind the inconsistencies and ineffectiveness of corporate action. One of the main contested terrains is companies’ ability to shape public discourse over AI promises and risks, whether by direct involvement in codes, the funding of studies, and dedicated departments, or by indirect influence through the funding of social and academic initiatives aimed at debating AI.
This ethical activism by corporations in the tech field has been labeled “owning ethics”, pointing to a process of institutionalization of this tendency [123]. Accordingly, critics of washing include within it the unethical behavior of business projects aimed at co-opting and influencing the debate on AI ethics [92] in the related media coverage, scholarly enquiry, and government action [124]. The well-known case of Timnit Gebru, fired from Google’s short-lived ethics team over the concerns she raised about machine learning threats [86], and the quick shut-down of the committee itself are only the tip of the iceberg. Such a conundrum is difficult to tackle because of legitimized power concentration, a lack of suitable legislation, and what has been called “the ethicist dilemma”: ethics experts occupy the difficult position of having to communicate and initiate action in the face of ethical shortcomings [92].
To effectively tame machinewashing and ethical digital washing strategies, returning to the debate underpinning CSR can be highly helpful. Indeed, there is a positive and formally universal consensus on the need to ethically guardrail AI development and use, although business responsibilities in doing so are neither clear nor univocal. To some, the extent of organizational moral responsibility in AI is tied to the degree of social consequences that a certain technology yields for humans [125], while others have argued for an integral responsibility to keep control over AI, which is potentially threatening to humankind as a whole [126]. The latter has been developed as a “human” moral argument backing CSR: organizations have to develop ethical AI because dehumanization risks are at stake for too large a number of people, with too heavy consequences.
The extensive impact of AI on internal and external stakeholders calls for an additional analysis of stakeholder engagement and management in AI development. Aiming at stakeholders’ wellbeing when confronted with AI elicits careful consideration of moral boundaries exceeding usual compliance. For example, in the case of algorithmic recruiting, its impact on internal and external stakeholders, through dehumanizing practices and the perils of bias and discrimination, adds further dimensions of mistrust and moral hazard beyond those of previous human misbehavior. AI governance and accountability require corporate ethical behavior going beyond simple juridical compliance to prevent boycotts, employee distrust, protests, and other ethical troubles [91], as the stakes of AI deployment are strictly tied to the risk of spreading the dehumanization of stakeholders rather than humanizing them [73].
Indeed, claims about the dehumanization perils tied to AI are multiplying from AI’s own developers and leaders, the main case being the open letter to pause AI experiments issued in 2023 on the Future of Life Institute website, with signatories ranging from tech leaders to global intellectuals as prestigious as Elon Musk and Yuval Noah Harari. Nonetheless, the implications of AI business practices are rarely fully and efficiently considered in the majority of the AI ethical guidelines issued so far [27], further fueling concerns about the “let’s see how it goes” approach taken by companies releasing new technologies [102]. Against this scenario, the role of the ethicist within big tech industries, and generally in AI-implementing organizations, emerges as highly relevant to prevent ethics from being “spoken by power” instead of speaking to power, i.e., to promote the efficacy of tech ethical initiatives in shaping organizational engagement with them [127]. Such involvement can yield the necessary “cross-organizational awareness” that has been discussed as crucial to preventing the “ethical nightmares” entailed by the spread of AI [67].
Such a reclaiming of the role of moral reasoning within the business and public spheres has indeed gained urgency because of the “inevitability and contingency” of big tech’s support for ethics, on which the latter’s effectiveness depends [127]. This further lends new centrality to the CSR moral argument as the main argument to prevent the dehumanization perils of AI. At this stage, the “human argument” relies on a call to responsibility in light of perils such as losing control over the technology [128], the destiny of creative and artistic work and jobs [129], the systematic undermining of human rights [96], and the affected quality of interpersonal relationships [61,130], among others. Within such a scenario, an assessment of strategic CSR needs to carefully consider whether corporate actions enhance or diminish the negative and positive impacts of such disruptive challenges. Relying on moral rather than business arguments has never been so salient for keeping people and their wellbeing as the target of organizational behavior and mission [131], as washing and wrongdoing will entail much more than eventual boycotts or reputational damage.

8. Concluding Remarks

To summarize, calls to tame and prevent the undesirable consequences of AI on humanity are multiplying, with corporate ethical behavior at the top of their concerns. Hence, framing the stakes of corporate involvement in AI ethics becomes central to informing organizational decision making and pursuing organizational AI responsibility. The moral case for responsible AI in business greatly contributes to stressing how, within the current social order, business not only has a social responsibility to comply with current and future regulatory frameworks but also has a distinctive human responsibility to consider when evaluating ethical dilemmas and trade-offs between AI-driven increases in profitability and production and their consequences for stakeholders’ wellbeing. This paper contributes by identifying a distinctive “human argument” backing ethical AI design and development, adding to debates on the underpinning arguments for CSR in at least three ways: first, by identifying the limits of the business argument underpinning strategic CSR when faced with AI development; second, by discussing corporate practices of digital ethical washing as harmful both to organizational reputational and compliance dimensions and to social and human wellbeing; third, by suggesting corporate practices of cross-organizational involvement of ethicists as necessary to guardrail against and prevent ethical AI shortcomings.
Such a “human argument” can integrate common moral arguments for CSR and strategic CSR, complementing “the business case” and overcoming its main above-mentioned limits. In particular, it supports strategic CSR beyond common considerations of legal enforcement and compliance and regardless of image returns, as the latter may induce washing behaviors and strategies. In this way, the ethics of AI represents an exemplary case for strategic CSR focused on the comprehensive protection of human rights at every stage of design and implementation [95,96]. The perspective discussed in this paper can inform various streams of future empirical research: scholars interested in “human-centered AI” development can focus on detailing which new professional roles within the organization are envisioned to ensure responsible AI development throughout the pipeline; scholars involved in the evaluation of CSR policies can rely on the human argument to test whether ethical guidelines and codes of ethical conduct focus only on short-term ethical risks or foresee long-term AI impacts; and comparative studies can be foreseen on organizations relying on extremely accentuated business arguments versus organizations showing exceptional moral commitment to the ethical use of AI, in order to contribute to shaping the “human argument” perspective or to criticize it.
Furthermore, this view is in line with cornerstone traditions within CSR and business ethics scholarship, such as political CSR, which urges businesses to protect and enhance human rights and desirable human conditions in all those cases in which the state and the public do not have the will or the power to intervene [15,132]. Future research can ask how businesses are interacting with the public sphere to contribute to legal frameworks on the ethics of AI, or how they are hindering the enforcement of responsible AI laws.
The practical implications of the paper can be envisioned for all organizations confronting ethical AI challenges: first, focusing on the long-term impact and social repercussions of decision making concerning AI without underestimating its ethical pitfalls; second, foreseeing the strategic planning of CSR strategies involving AI, either by promoting external consulting on the ethical guardrailing and training of technologies or by internalizing ethical surveillance at all stages of AI implementation; third, informing AI-focused CSR initiatives with a holistic view of which stakeholders would be affected and how to address them through specific programs of involvement, so as to prevent dehumanization processes from arising, ultimately by diverting resources that might otherwise be intended for washing practices to ethical programs aimed at engaging targeted audiences within AI transition processes.

Funding

This research received funding from the University of Urbino Carlo Bo.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. OECD. Recommendation of the Council on Artificial Intelligence; OECD: Paris, France, 2019. [Google Scholar]
  2. Kaplan, A.; Haenlein, M. Siri, Siri, in My Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence. Bus. Horiz. 2019, 62, 15–25. [Google Scholar] [CrossRef]
  3. D’Cruz, P.; Du, S.; Noronha, E.; Parboteeah, K.P.; Trittin-Ulbrich, H.; Whelan, G. Technology, Megatrends and Work: Thoughts on the Future of Business Ethics. J. Bus. Ethics 2022, 180, 879–902. [Google Scholar] [CrossRef] [PubMed]
  4. Böhm, S.; Carrington, M.; Cornelius, N.; de Bruin, B.; Greenwood, M.; Hassan, L.; Jain, T.; Karam, C.; Kourula, A.; Romani, L.; et al. Ethics at the Centre of Global and Local Challenges: Thoughts on the Future of Business Ethics. J. Bus. Ethics 2022, 180, 835–861. [Google Scholar] [CrossRef] [PubMed]
  5. Obradovich, B.N.; Powers, W.; Cebrian, M.; Rahwan, I.; Conrent, R. Beware Corporate “Machinewashing” of AI; Media MIT: Cambridge, MA, USA, 2019. [Google Scholar]
  6. Bowen, F. After Greenwashing; Cambridge University Press: Cambridge, UK, 2014; ISBN 9781139541213. [Google Scholar]
  7. Camilleri, M.; Sheehy, B. Strategic Corporate Social Responsibility. In Encyclopedia of Sustainable Management; Springer: Berlin/Heidelberg, Germany, 2023; pp. 1–3. [Google Scholar]
  8. Kuokkanen, H.; Sun, W. Companies, Meet Ethical Consumers: Strategic CSR Management to Impact Consumer Choice. J. Bus. Ethics 2020, 166, 403–423. [Google Scholar] [CrossRef]
  9. Carroll, A.B. Corporate Social Responsibility: Perspectives on the CSR Construct’s Development and Future. Bus. Soc. 2021, 60, 1258–1278. [Google Scholar] [CrossRef]
  10. McWilliams, A.; Siegel, D. Corporate Social Responsibility: A Theory of the Firm Perspective. Acad. Manag. Rev. 2001, 26, 117–127. [Google Scholar] [CrossRef]
  11. Porter, M.E.; Kramer, M.R. Creating Shared Value: How to Reinvent Capitalism and Unleash a Wave of Innovation and Growth. In Managing Sustainable Business; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  12. Crane, A.; Palazzo, G.; Spence, L.J.; Matten, D. Contesting the Value of “Creating Shared Value”. Calif. Manag. Rev. 2014, 56, 130–153. [Google Scholar] [CrossRef]
  13. Stanford Encyclopedia of Philosophy. Available online: https://plato.stanford.edu/entries/ethics-ai/ (accessed on 17 December 2023).
  14. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. In Ethics, Governance, and Policies in Artificial Intelligence; Springer: Cham, Switzerland, 2021; pp. 19–39. [Google Scholar]
  15. Scherer, A.G.; Rasche, A.; Palazzo, G.; Spicer, A. Managing for Political Corporate Social Responsibility: New Challenges and Directions for PCSR 2.0. J. Manag. Stud. 2016, 53, 273–298. [Google Scholar] [CrossRef]
  16. van Wynsberghe, A. Sustainable AI: AI for Sustainability and the Sustainability of AI. AI Ethics 2021, 1, 213–218. [Google Scholar] [CrossRef]
  17. Gabriel, I. Artificial Intelligence, Values, and Alignment. Minds Mach. 2020, 30, 411–437. [Google Scholar] [CrossRef]
  18. Raisch, S.; Krakowski, S. Artificial Intelligence and Management: The Automation–Augmentation Paradox. Acad. Manag. Rev. 2021, 46, 192–210. [Google Scholar] [CrossRef]
  19. Blackman, R. A Practical Guide to Building Ethical AI. Harvard Business Review, 15 October 2020. [Google Scholar]
  20. Bietti, E. From Ethics Washing to Ethics Bashing: A Moral Philosophy View on Tech Ethics. J. Soc. Comput. 2021, 2, 266–283. [Google Scholar] [CrossRef]
  21. Vogel, D. Is There a Market for Virtue? The Business Case for Corporate Social Responsibility. Calif. Manag. Rev. 2005, 47, 19–45. [Google Scholar]
  22. Camilleri, M.A. Artificial Intelligence Governance: Ethical Considerations and Implications for Social Responsibility. Expert Syst. 2023, e13406. [Google Scholar] [CrossRef]
  23. Berente, N.; Gu, B.; Recker, J.; Santhanam, R. Managing Artificial Intelligence. MIS Q. 2021, 45, 1433–1450. [Google Scholar] [CrossRef]
  24. Elliott, K.; Price, R.; Shaw, P.; Spiliotopoulos, T.; Ng, M.; Coopamootoo, K.; van Moorsel, A. Towards an Equitable Digital Society: Artificial Intelligence (AI) and Corporate Digital Responsibility (CDR). Society 2021, 58, 179–188. [Google Scholar] [CrossRef] [PubMed]
  25. Pope, S.; Wæraas, A. CSR-Washing Is Rare: A Conceptual Framework, Literature Review, and Critique. J. Bus. Ethics 2016, 137, 173–193. [Google Scholar] [CrossRef]
  26. van Maanen, G. AI Ethics, Ethics Washing, and the Need to Politicize Data Ethics. Digit. Soc. 2022, 1, 9. [Google Scholar] [CrossRef]
  27. Attard-Frost, B.; De los Ríos, A.; Walters, D.R. The Ethics of AI Business Practices: A Review of 47 AI Ethics Guidelines. AI Ethics 2023, 3, 389–406. [Google Scholar] [CrossRef]
  28. Seele, P.; Schultz, M.D. From Greenwashing to Machinewashing: A Model and Future Directions Derived from Reasoning by Analogy. J. Bus. Ethics 2022, 178, 1063–1089. [Google Scholar] [CrossRef]
  29. Hasnas, J. The Normative Theories of Business Ethics: A Guide for the Perplexed. Bus. Ethics Q. 1998, 8, 19–42. [Google Scholar] [CrossRef]
  30. Hahn, T.; Figge, F.; Pinkse, J.; Preuss, L. A Paradox Perspective on Corporate Sustainability: Descriptive, Instrumental, and Normative Aspects. J. Bus. Ethics 2018, 148, 235–248. [Google Scholar] [CrossRef]
  31. Randles, S.; Laasch, O. Theorising the Normative Business Model. Organ. Environ. 2016, 29, 53–73. [Google Scholar] [CrossRef]
  32. Weaver, G.R.; Trevino, L.K. Normative And Empirical Business Ethics: Separation, Marriage of Convenience, or Marriage of Necessity? Bus. Ethics Q. 1994, 4, 129–143. [Google Scholar] [CrossRef]
  33. Rosenthal, S.B.; Buchholz, R.A. The Empirical-Normative Split in Business Ethics. Bus. Ethics Q. 2000, 10, 399–408. [Google Scholar] [CrossRef]
  34. Crane, A.; Matten, D.; Glozer, S.; Spence, L.J. Business Ethics: Managing Corporate Citizenship and Sustainability in the Age of Globalization; Oxford University Press: New York, NY, USA, 2019. [Google Scholar]
  35. Werhane, P.H. The Normative/Descriptive Distinction in Methodologies of Business Ethics. Issues Bus. Ethics 2019, 48, 21–25. [Google Scholar] [CrossRef]
  36. Goodchild, L.F. Toward a Foundational Normative Method in Business Ethics. J. Bus. Ethics 1986, 5, 485–499. [Google Scholar] [CrossRef]
  37. Havlinova, A.; Kukacka, J. Corporate Social Responsibility and Stock Prices After the Financial Crisis: The Role of Strategic CSR Activities. J. Bus. Ethics 2023, 182, 223–242. [Google Scholar] [CrossRef]
  38. Baron, D.P. Private Politics, Corporate Social Responsibility, and Integrated Strategy. J. Econ. Manag. Strategy 2001, 10, 7–45. [Google Scholar] [CrossRef]
  39. Camilleri, M.A. Strategic Attributions of Corporate Social Responsibility and Environmental Management: The Business Case for Doing Well by Doing Good! Sustain. Dev. 2022, 30, 409–422. [Google Scholar] [CrossRef]
  40. Vishwanathan, P.; van Oosterhout, H.; Heugens, P.P.; Duran, P.; Van Essen, M. Strategic CSR: A Concept Building Meta-Analysis. J. Manag. Stud. 2020, 57, 314–350. [Google Scholar] [CrossRef]
  41. Anlesinya, A.; Abugre, J.B. Strategic CSR Practices, Strategic Orientation and Business Value Creation among Multinational Subsidiaries in Ghana. Soc. Bus. Rev. 2022, 17, 257–279. [Google Scholar] [CrossRef]
  42. Mochales, G.; Blanch, J. Unlocking the Potential of CSR: An Explanatory Model to Determine the Strategic Character of CSR Activities. J. Bus. Res. 2022, 140, 310–323. [Google Scholar] [CrossRef]
  43. Perez-Batres, L.A.; Doh, J.P.; Miller, V.V.; Pisani, M.J. Stakeholder Pressures as Determinants of CSR Strategic Choice: Why Do Firms Choose Symbolic Versus Substantive Self-Regulatory Codes of Conduct? J. Bus. Ethics 2012, 110, 157–172. [Google Scholar] [CrossRef]
  44. Shea, C.T.; Hawn, O.V. Microfoundations of Corporate Social Responsibility and Irresponsibility. Acad. Manag. J. 2019, 62, 1609–1642. [Google Scholar] [CrossRef]
  45. Okafor, A.; Adeleye, B.N.; Adusei, M. Corporate Social Responsibility and Financial Performance: Evidence from U.S. Tech Firms. J. Clean. Prod. 2021, 292, 126078. [Google Scholar] [CrossRef]
  46. Rodrigo, P.; Aqueveque, C.; Duran, I.J. Do Employees Value Strategic CSR? A Tale of Affective Organizational Commitment and Its Underlying Mechanisms. Bus. Ethics Eur. Rev. 2019, 28, 459–475. [Google Scholar] [CrossRef]
  47. Reimann, F.; Ehrgott, M.; Kaufmann, L.; Carter, C.R. Local Stakeholders and Local Legitimacy: MNEs’ Social Strategies in Emerging Economies. J. Int. Manag. 2012, 18, 1–17. [Google Scholar] [CrossRef]
  48. Belu, C.; Manescu, C. Strategic Corporate Social Responsibility and Economic Performance. Appl. Econ. 2013, 45, 2751–2764. [Google Scholar] [CrossRef]
  49. Kuokkanen, H.; Sun, W. Social Desirability and Cynicism Biases in CSR Surveys: An Empirical Study of Hotels. J. Hosp. Tour. Insights 2020, 3, 567–588. [Google Scholar] [CrossRef]
  50. Laszlo, C. Sustainable Value: How the World’s Leading Companies Are Doing Well by Doing Good; Stanford University Press: Redwood City, CA, USA, 2008. [Google Scholar]
  51. Nardi, L. The Corporate Social Responsibility Price Premium as an Enabler of Substantive CSR. Acad. Manag. Rev. 2022, 47, 282–308. [Google Scholar] [CrossRef]
  52. Kuokkanen, H.; Sun, W. Willingness to Pay for Corporate Social Responsibility (CSR): Does Strategic CSR Management Matter? J. Hosp. Tour. Res. 2023, 10963480231182990. [Google Scholar] [CrossRef]
  53. Carroll, A.B.; Shabana, K.M. The Business Case for Corporate Social Responsibility: A Review of Concepts, Research and Practice. Int. J. Manag. Rev. 2010, 12, 85–105. [Google Scholar] [CrossRef]
  54. Aguinis, H. Organizational Responsibility: Doing Good and Doing Well. In APA Handbook of Industrial and Organizational Psychology, Vol 3: Maintaining, Expanding, and Contracting the Organization; American Psychological Association: Washington, DC, USA, 2011; pp. 855–879. [Google Scholar]
  55. Lozano, R. A Holistic Perspective on Corporate Sustainability Drivers. Corp. Soc. Responsib. Environ. Manag. 2015, 22, 32–44. [Google Scholar] [CrossRef]
  56. Bansal, P.; Roth, K. Why Companies Go Green: A Model of Ecological Responsiveness. Acad. Manag. J. 2000, 43, 717–736. [Google Scholar] [CrossRef]
  57. Krkač, K.; Bračević, I. Artificial Intelligence and Social Responsibility. In The Palgrave Handbook of Corporate Social Responsibility; Springer International Publishing: Cham, Switzerland, 2020; pp. 1–23. [Google Scholar]
  58. Kitsios, F.; Kamariotou, M. Artificial Intelligence and Business Strategy towards Digital Transformation: A Research Agenda. Sustainability 2021, 13, 2025. [Google Scholar] [CrossRef]
  59. McCarthy, B.; Saleh, T. Building the AI-powered organization. Harv. Bus. Rev. 2019, 97, 62–73. [Google Scholar]
  60. McAfee, A.; Brynjolfsson, E. Machine, Platform, Crowd: Harnessing Our Digital Future; WW Norton & Company: New York, NY, USA, 2017. [Google Scholar]
  61. Haenlein, M.; Huang, M.H.; Kaplan, A. Guest Editorial: Business Ethics in the Era of Artificial Intelligence. J. Bus. Ethics 2022, 178, 867–869. [Google Scholar] [CrossRef]
  62. Davenport, T.H.; Brynjolfsson, E.; McAfee, A.; Wilson, H.J. (Eds.) Artificial Intelligence: The Insights You Need from Harvard Business Review; Harvard Business Review: Brighton, MA, USA, 2019. [Google Scholar]
  63. Bruhn, J.G. The Functionality of Gray Area Ethics in Organizations. J. Bus. Ethics 2009, 89, 205–214. [Google Scholar] [CrossRef]
  64. Johnson, D.G. Technology with No Human Responsibility? J. Bus. Ethics 2015, 127, 707–715. [Google Scholar] [CrossRef]
  65. Stahl, B.C.; Antoniou, J.; Ryan, M.; Macnish, K.; Jiya, T. Organisational Responses to the Ethical Issues of Artificial Intelligence. AI Soc. 2022, 37, 23–37. [Google Scholar] [CrossRef]
  66. Harland, H.; Dazeley, R.; Nakisa, B.; Cruz, F.; Vamplew, P. AI Apology: Interactive Multi-Objective Reinforcement Learning for Human-Aligned AI. Neural Comput. Appl. 2023, 35, 16917–16930. [Google Scholar] [CrossRef]
  67. Blackman, R.; Niño, C. How to Avoid the Ethical Nightmares of Emerging Technology. Harvard Business Review, 9 May 2023. [Google Scholar]
  68. Garriga, E.; Melé, D. Corporate Social Responsibility Theories: Mapping the Territory. J. Bus. Ethics 2004, 53, 51–71. [Google Scholar] [CrossRef]
  69. Sison, A.J.G.; Fontrodona, J. Participating in the Common Good of the Firm. J. Bus. Ethics 2013, 113, 611–625. [Google Scholar] [CrossRef]
  70. Freeman, R.E. The Politics of Stakeholder Theory: Some Future Directions. Bus. Ethics Q. 1994, 4, 409–421. [Google Scholar] [CrossRef]
  71. Berman, S.L.; Wicks, A.C.; Kotha, S.; Jones, T.M. Does Stakeholder Orientation Matter? The Relationship Between Stakeholder Management Models and Firm Financial Performance. Acad. Manag. J. 1999, 42, 488–506. [Google Scholar] [CrossRef]
  72. Dmytriyev, S.D.; Freeman, R.E.; Hörisch, J. The Relationship between Stakeholder Theory and Corporate Social Responsibility: Differences, Similarities, and Implications for Social Issues in Management. J. Manag. Stud. 2021, 58, 1441–1470. [Google Scholar] [CrossRef]
  73. Sachs, S.; Kujala, J. Stakeholder Engagement in Humanizing Business. In Issues in Business Ethics; Springer Science and Business Media B.V.: Berlin/Heidelberg, Germany, 2022; Volume 53, pp. 559–572. [Google Scholar]
  74. Camilleri, M.A. Corporate Sustainability, Social Responsibility and Environmental Management; Springer International Publishing: Cham, Switzerland, 2017; ISBN 978-3-319-46848-8. [Google Scholar]
  75. Melé, D. Integrating Personalism into Virtue-Based Business Ethics: The Personalist and the Common Good Principles. J. Bus. Ethics 2009, 88, 227–244. [Google Scholar] [CrossRef]
  76. Argandoña, A. The Stakeholder Theory and the Common Good. J. Bus. Ethics 1998, 17, 1093–1102. [Google Scholar] [CrossRef]
  77. Kujala, J.; Sachs, S.; Leinonen, H.; Heikkinen, A.; Laude, D. Stakeholder Engagement: Past, Present, and Future. Bus. Soc. 2022, 61, 1136–1196. [Google Scholar] [CrossRef]
  78. Hafenbrädl, S.; Waeger, D. Ideology and the Micro-Foundations of CSR: Why Executives Believe in the Business Case for CSR and How This Affects Their CSR Engagements. Acad. Manag. J. 2017, 60, 1582–1606. [Google Scholar] [CrossRef]
  79. Vallentin, S.; Murillo, D. Ideologies of Corporate Responsibility: From Neoliberalism to “Varieties of Liberalism”. Bus. Ethics Q. 2022, 32, 635–670. [Google Scholar] [CrossRef]
  80. Kaplan, S. Beyond the Business Case for Social Responsibility. Acad. Manag. Discov. 2020, 6, 1–4. [Google Scholar] [CrossRef]
  81. Melé, D. The Firm as a “Community of Persons”: A Pillar of Humanistic Business Ethos. J. Bus. Ethics 2012, 106, 89–101. [Google Scholar] [CrossRef]
  82. Widder, D.G.; Whittaker, M. Open (for Business): Big Tech, Concentrated Power, and the Political Economy of Open AI; SSRN: Amsterdam, The Netherlands, 2023. [Google Scholar]
  83. Wolkenstein, A. What Has the Trolley Dilemma Ever Done for Us (and What Will It Do in the Future)? On Some Recent Debates about the Ethics of Self-Driving Cars. Ethics Inf. Technol. 2018, 20, 163–173. [Google Scholar] [CrossRef]
  84. Fritts, M.; Cabrera, F. AI Recruitment Algorithms and the Dehumanization Problem. Ethics Inf. Technol. 2021, 23, 791–801. [Google Scholar] [CrossRef]
  85. Roszkowska, P.; Melé, D. Organizational Factors in the Individual Ethical Behaviour. The Notion of the “Organizational Moral Structure”. Humanist. Manag. J. 2021, 6, 187–209. [Google Scholar] [CrossRef]
  86. Martin, K. Ethical Implications and Accountability of Algorithms. J. Bus. Ethics 2019, 160, 835–850. [Google Scholar] [CrossRef]
  87. Yankovskaya, V.; Gerasimova, E.B.; Osipov, V.S.; Lobova, S.V. Environmental CSR From the Standpoint of Stakeholder Theory: Rethinking in the Era of Artificial Intelligence. Front. Environ. Sci. 2022, 10, 953996. [Google Scholar] [CrossRef]
  88. Smith, N.C.; Palazzo, G.; Bhattacharya, C.B. Marketing’s Consequences: Stakeholder Marketing and Supply Chain Corporate Social Responsibility Issues. Bus. Ethics Q. 2010, 20, 617–641. [Google Scholar] [CrossRef]
  89. Doorey, D.J. The Transparent Supply Chain: From Resistance to Implementation at Nike and Levi-Strauss. J. Bus. Ethics 2011, 103, 587–603. [Google Scholar] [CrossRef]
  90. Altenried, M. The Platform as Factory: Crowdwork and the Hidden Labour behind Artificial Intelligence. Cap. Cl. 2020, 44, 145–158. [Google Scholar] [CrossRef]
  91. Knight, W. Wired, 2023. Available online: https://www.wired.com/story/sam-altman-officially-returns-to-openai-board-seat-microsoft/ (accessed on 30 November 2023).
  92. Sætra, H.S.; Coeckelbergh, M.; Danaher, J. The AI Ethicist’s Dilemma: Fighting Big Tech by Supporting Big Tech. AI Ethics 2022, 2, 15–27. [Google Scholar] [CrossRef]
  93. Ammanath, B.; Blackman, R. Everyone in Your Organization Needs to Understand AI Ethics. Available online: https://hbr.org/2021/07/everyone-in-your-organization-needs-to-understand-ai-ethics (accessed on 5 January 2024).
  94. Novelli, C.; Taddeo, M.; Floridi, L. Accountability in Artificial Intelligence: What It Is and How It Works. AI Soc. 2023. [Google Scholar] [CrossRef]
  95. Crawford, K. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence; Yale University Press: New Haven, CT, USA; London, UK, 2021. [Google Scholar]
  96. Yeung, K.; Howes, A.; Pogrebna, G. AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing; Oxford University Press: Oxford, UK, 2019. [Google Scholar]
  97. Buhmann, A.; Fieseler, C. Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence. Bus. Ethics Q. 2023, 33, 146–179. [Google Scholar] [CrossRef]
  98. Floridi, L.; Cowls, J.; King, T.C.; Taddeo, M. How to design AI for social good: Seven essential factors. In Ethics, Governance, and Policies in Artificial Intelligence; Springer: Cham, Switzerland, 2021; pp. 125–151. [Google Scholar]
  99. Adorno, T.W.; Horkheimer, M. Dialectic of Enlightenment; Verso: London, UK, 1997; Volume 15. [Google Scholar]
  100. Lindebaum, D.; Moser, C.; Ashraf, M.; Glaser, V.L. Reading the technological society to understand the mechanization of values and its ontological consequences. Acad. Manag. Rev. 2023, 48, 575–592. [Google Scholar] [CrossRef]
  101. Benjamins, R. A Choices Framework for the Responsible Use of AI. AI Ethics 2021, 1, 49–53. [Google Scholar] [CrossRef]
  102. Blackman, R. Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI; Harvard Business Review Press: Brighton, MA, USA, 2023. [Google Scholar]
  103. Siltaloppi, J.; Rajala, R.; Hietala, H. Integrating CSR with Business Strategy: A Tension Management Perspective. J. Bus. Ethics 2021, 174, 507–527. [Google Scholar] [CrossRef]
  104. van Bommel, K. Managing Tensions in Sustainable Business Models: Exploring Instrumental and Integrative Strategies. J. Clean. Prod. 2018, 196, 829–841. [Google Scholar] [CrossRef]
  105. Hahn, T.; Sharma, G.; Glavas, A. Employee-CSR Tensions: Drivers of Employee (Dis)Engagement with Contested CSR Initiatives. J. Manag. Stud. 2023. [Google Scholar] [CrossRef]
  106. Høvring, C.M.; Andersen, S.E.; Nielsen, A.E. Discursive Tensions in CSR Multi-Stakeholder Dialogue: A Foucauldian Perspective. J. Bus. Ethics 2018, 152, 627–645. [Google Scholar] [CrossRef]
  107. Koep, L. Tensions in Aspirational CSR Communication—A Longitudinal Investigation of CSR Reporting. Sustainability 2017, 9, 2202. [Google Scholar] [CrossRef]
  108. Bento, R.F.; Mertins, L.; White, L.F. Ideology and the Balanced Scorecard: An Empirical Exploration of the Tension Between Shareholder Value Maximization and Corporate Social Responsibility. J. Bus. Ethics 2017, 142, 769–789. [Google Scholar] [CrossRef]
  109. Geva, A. A Typology of Moral Problems in Business: A Framework for Ethical Management. J. Bus. Ethics 2006, 69, 133–147. [Google Scholar] [CrossRef]
  110. Aqueveque, C.; Rodrigo, P.; Duran, I.J. Be Bad but (Still) Look Good: Can Controversial Industries Enhance Corporate Reputation through CSR Initiatives? Bus. Ethics Eur. Rev. 2018, 27, 222–237. [Google Scholar] [CrossRef]
  111. Ioannou, I.; Kassinis, G.; Papagiannakis, G. The Impact of Perceived Greenwashing on Customer Satisfaction and the Contingent Role of Capability Reputation. J. Bus. Ethics 2023, 185, 333–347. [Google Scholar] [CrossRef]
  112. Schultz, M.D.; Seele, P. Business Legitimacy and Communication Ethics: Discussing Greenwashing and Credibility Beyond Habermasian Idealism. In Handbook of Business Legitimacy; Springer International Publishing: Cham, Switzerland, 2020; pp. 655–669. [Google Scholar]
  113. Pizzetti, M.; Gatti, L.; Seele, P. Firms Talk, Suppliers Walk: Analyzing the Locus of Greenwashing in the Blame Game and Introducing ‘Vicarious Greenwashing’. J. Bus. Ethics 2021, 170, 21–38. [Google Scholar] [CrossRef]
  114. Lubitow, A.; Davis, M. Pastel Injustice: The Corporate Use of Pinkwashing for Profit. Environ. Justice 2011, 4, 139–144. [Google Scholar] [CrossRef]
  115. Gutierrez, L.; Montiel, I.; Surroca, J.A.; Tribo, J.A. Rainbow Wash or Rainbow Revolution? Dynamic Stakeholder Engagement for SDG-Driven Responsible Innovation. J. Bus. Ethics 2022, 180, 1113–1136. [Google Scholar] [CrossRef]
  116. Sobande, F. Woke-Washing: “Intersectional” Femvertising and Branding “Woke” Bravery. Eur. J. Mark. 2019, 54, 2723–2745. [Google Scholar] [CrossRef]
  117. Walker, K.; Wan, F. The Harm of Symbolic Actions and Green-Washing: Corporate Actions and Communications on Environmental Performance and Their Financial Implications. J. Bus. Ethics 2012, 109, 227–242. [Google Scholar] [CrossRef]
  118. Berrone, P.; Fosfuri, A.; Gelabert, L. Does Greenwashing Pay Off? Understanding the Relationship Between Environmental Actions and Environmental Legitimacy. J. Bus. Ethics 2017, 144, 363–379. [Google Scholar] [CrossRef]
  119. Ahmad, F.; Guzmán, F.; Al-Emran, M. Brand Activism and the Consequence of Woke Washing. J. Bus. Res. 2024, 170, 114362. [Google Scholar] [CrossRef]
  120. Foss, N.J.; Klein, P.G. Why do companies go woke? Acad. Manag. Perspect. 2022, 37, 351–367. [Google Scholar] [CrossRef]
  121. Warren, D.E. Woke Corporations and the Stigmatization of Corporate Social Initiatives. Bus. Ethics Q. 2022, 32, 169–198. [Google Scholar] [CrossRef]
  122. Munn, L. The Uselessness of AI Ethics. AI Ethics 2023, 3, 869–877. [Google Scholar] [CrossRef]
  123. Metcalf, J.; Moss, E. Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics. Soc. Res. Int. Q. 2019, 86, 449–476. [Google Scholar] [CrossRef]
  124. Gerdes, A. The Tech Industry Hijacking of the AI Ethics Research Agenda and Why We Should Reclaim It. Discov. Artif. Intell. 2022, 2, 25. [Google Scholar] [CrossRef]
  125. Zerilli, J.; Knott, A.; Maclaurin, J.; Gavaghan, C. Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? Philos. Technol. 2019, 32, 661–683. [Google Scholar] [CrossRef]
  126. Hooker, J.; Kim, T.W. Humanizing Business in the Age of Artificial Intelligence. In Humanizing Business. Issues in Business Ethics; Dion, M., Freeman, R.E., Dmytriyev, S.D., Eds.; Springer: Cham, Switzerland, 2022; Volume 53, pp. 601–613. [Google Scholar]
  127. Hu, L. Tech Ethics: Speaking Ethics to Power, or Power Speaking Ethics? J. Soc. Comput. 2021, 2, 238–248. [Google Scholar] [CrossRef]
  128. Kim, T.W.; Maimone, F.; Pattit, K.; Sison, A.J.; Teehankee, B. Master and Slave: The Dialectic of Human-Artificial Intelligence Engagement. Humanist. Manag. J. 2021, 6, 355–371. [Google Scholar] [CrossRef]
  129. Amabile, T.M. Creativity, artificial intelligence, and a world of surprises. Acad. Manag. Discov. 2020, 6, 351–354. [Google Scholar] [CrossRef]
  130. Huang, M.-H.; Rust, R.; Maksimovic, V. The Feeling Economy: Managing in the Next Generation of Artificial Intelligence (AI). Calif. Manag. Rev. 2019, 61, 43–65. [Google Scholar] [CrossRef]
  131. Sandelands, L. The Business of Business Is the Human Person: Lessons from the Catholic Social Tradition. J. Bus. Ethics 2009, 85, 93–101. [Google Scholar] [CrossRef]
  132. Scherer, A.G.; Neesham, C.; Schoeneborn, D.; Scholz, M. Guest Editors’ Introduction: New Challenges to the Enlightenment: How Twenty-First-Century Sociotechnological Systems Facilitate Organized Immaturity and How to Counteract It. Bus. Ethics Q. 2023, 33, 409–439. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Fioravante, R. Beyond the Business Case for Responsible Artificial Intelligence: Strategic CSR in Light of Digital Washing and the Moral Human Argument. Sustainability 2024, 16, 1232. https://doi.org/10.3390/su16031232

