Benefits or concerns of AI: A multistakeholder responsibility



Introduction
Artificial Intelligence (AI) is rapidly transforming various aspects of our lives, influencing how we communicate, access information, work, and make decisions. While AI offers substantial benefits, it raises ethical, legal, and social concerns (Choi & Moon, 2023; Xu, 2019), including bias in AI algorithms and potential job market impacts. Ensuring responsible AI use is a multifaceted challenge involving governments, businesses, civil society organizations, individual users (Liu & Maas, 2021), and AI innovators. This entails balancing stakeholder interests and adopting AI responsibly (Ghallab, 2019). The development and adoption of AI are influenced by stakeholders' interests, knowledge, power, and relations in pursuit of their goals (Gazheli, Antal, & van den Bergh, 2015; Rotmans et al., 2001). AI adoption transforms not only our daily routines but also societal structures and social relations (Kilian, Ventura, & Bailey, 2023; Makridakis, 2017; Gurstein, 1985; Boden, 1984). Stakeholder interests are pivotal in managing complex socio-technical system transitions (Scholz, Spoerri, & Lang, 2009). To address AI's societal impacts, research is needed to establish causal relations between stakeholder interests, behaviours, and the outcomes of AI adoption for society. The findings of this review article offer a structured synthesis of existing literature defining AI and its societal impacts (benefits and concerns). The review leads us to theorize a multi-stakeholder framework that can be applied to attribute the nature and extent of the impacts of AI adoption to stakeholders' interests.

A multi-stakeholder theoretical framework for responsible AI
As a result of this literature review, we introduce a multistakeholder theoretical framework (Fig. 1) by extending Freeman's Stakeholder Theory (1984) to the broader context of AI adoption. The proposed framework (Fig. 1) implies a causal relation between the interests of stakeholders and the impacts of AI. The framework can be read as: "The nature (positive or negative) and level (high or low impact) of AI adoption depend on stakeholder interests, reflecting their roles in the societal network." As shown in the diagram (Fig. 1), three generic categories of AI stakeholders are highlighted: (1) AI supply chain actors, (2) users of AI, and (3) AI regulators and governance actors. This categorization of the stakeholders of AI is based on the similarity of interests according to the type of stake in AI. The term 'impacts of AI' refers to the benefits of AI (Table 3) and the concerns of AI (Table 4).
Applying Stakeholder Theory (Freeman, 1984) in the context of responsible AI could provide useful insights for managing the impacts of AI on society. One of the key features of stakeholder management theory is that it recognizes that different stakeholders have different levels of power, influence, and legitimacy (Osborne & Gaebler, 1992; Gomes, 2006). Freeman (1984) defined stakeholders as "any individual, group or organization who has interest in the functions of a firm and on whom the firm depends for achieving its goals" (Freeman, 1984; Freeman, Harrison, & Wicks, 2007; Harrison, Freeman, & de Abreu, 2015). Stakeholder Theory (ST) sees all stakeholders as belonging to a single complex interconnected system of owners and users of the organization, whereby every stakeholder's action has an impact on every other stakeholder. For example, the developers of AI systems may have more power and influence over the design and deployment of AI systems, while users may have less control but more of a vested interest in the AI that they use. Similarly, governance stakeholders, such as regulators, may have a more formal role in managing the concerns of AI, but less direct involvement in the day-to-day operations of AI systems.

Methods
Due to the multidisciplinary nature of the subject and the abundance of recent literature, a semi-systematic literature review was chosen as the research method for this article. A comprehensive search strategy was applied to identify relevant literature from various sources, and a predetermined set of inclusion and exclusion criteria was used to ensure academic quality standards were met. Selection criteria included thematic relevance, year of publication, and source validity, with an emphasis on recent publications (2013 to 2022). However, for certain topics, such as the origin of AI and theories of ethics, historical literature was considered. The selected literature was categorized into four thematic groups based on subject matter (Table 1). A standardized process was employed for synthesizing and analyzing the studies, using structured illustrations and summary tables to report the results.
This comprehensive approach to studying AI literature from its origins to its effects, encompassing state-of-the-art discussions on AI in everyday life, provides a context for understanding AI's current and future impacts, including its development, motivations, historical influences, and evolving nature. The historical perspective aids in uncovering technology's embedded assumptions, values, biases, and evolutionary journey. Moreover, it identifies the driving forces behind AI's development and anticipates its future impact on society. Stahl, Schroeder, and Rodrigues (2023) highlight the importance, for social scientists, of knowing AI's technical intricacies.

Findings
Key findings from the literature review are presented below in two subsections. The first (4.1) is based on a review of definitions of AI, resulting in a comparative analogy between intelligent machines and humans. The second subsection (4.2) consolidates diffused discussions on the impacts of AI, subsequently establishing the theoretical foundation for the multi-stakeholder framework for responsible AI.

The definition of Artificial Intelligence
Artificial Intelligence is a progressive effort to artificially create, in machines, cognitive abilities that are possessed by living beings (Schank & Abelson, 1977; Schank, 1987; Trappl, 1986). The term 'Artificial Intelligence' was first coined by John McCarthy in 1956 at a conference called "The Dartmouth Summer Research Project on Artificial Intelligence" (Trappl, 1986). However, it is believed that research on AI started as early as 1947, when the English mathematician Alan Turing discussed the concept during one of his lectures and expressed the view that AI could best be developed by advancing computer programming (McCarthy, 2007). In 1983, a survey was conducted to identify the definition of AI; a total of 143 different definitions were collected. The survey findings pointed towards three classes of definitions of AI (Table 2). Each of these classic definitions is believed to have laid the foundations for one of the three fundamental approaches in the research and development of AI.
However, it is often reported in the literature that the definition of AI remains debatable (Monett et al., 2020; Bhatnagar et al., 2018; Nilsson, 2009; Brachman, 2006; Hearst & Hirsh, 2000; Allen, 1998; Kirsh, 2018). If we think about AI from the process perspective, we may generically explain Artificial Intelligence as "the process of creating intelligence using non-biological materials and procedures". Theoretically, AI is rooted in the branch of computer science (Turing, 1947; McCarthy, 2007); at its core, it involves the subjects of computer programming and algorithm design, mechanics, and electronics (Fulmer, 2019). It is widely accepted that artificially simulating cognitive behaviour (through algorithms) is creating artificial intelligence; but the question that remains unanswered is 'what is intelligence?' There is no clear and universally accepted definition of the concept of 'intelligence' (Hassani et al., 2020; Legg & Hutter, 2007); the concept remains highly debated among psychologists (Ruhl, 2023). Consequently, the term Artificial Intelligence is perceived and explained in many ways (Monett et al., 2019). From the literature review, we can conclude that the lack of consensus in defining Artificial Intelligence is better explained by the lack of consensus in defining the psychological concept of 'intelligence'. Artificial Intelligence has always been presented as an artificially created ability of machines to act like humans. The definition of AI is somewhat progressive, and it reflects state-of-the-art advancements in automation.

Intelligent agents
All living organisms that can demonstrate adaptive reactions to environmental stimuli can be considered intelligent agents (Okasha, 2023). Technically, not all automated machines can be classified as Artificially Intelligent Agents (Owen-Hill, 2017), but all automated machines operate with some level of autonomy, without any direct intervention by humans (Castelfranchi & Falcone, 1998). In this article, we use the term Programmable Intelligent Agents (PIA) for automated machines that are operated by programmable AI, and the term Autonomous Intelligent Agents (AIA) for machines that are operated by autonomous AI. Programmable Intelligent Agents can perform only those specific tasks which they are programmed for, whereas Autonomous Intelligent Agents can autonomously collect information from their environment, using sensors; analyze that information and decide how to act, using algorithms; and execute the decision on their own, using actuators¹ (Russell & Norvig, 1995; Oskouei, Varzeghani, & Samadyar, 2014). A driverless car is an example of an Autonomous Intelligent Agent (AIA), while a household device such as a smart washing machine is an example of a Programmable Intelligent Agent (PIA).
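The PIA/AIA distinction described above can be illustrated with a minimal code sketch. This is not drawn from the reviewed literature; the class and method names are hypothetical, chosen only to mirror the sense-decide-act vocabulary used in this section. A PIA is modelled as a fixed lookup from sensed conditions to pre-programmed actions, so it can handle only the tasks it was programmed for:

```python
from abc import ABC, abstractmethod

class IntelligentAgent(ABC):
    """Sense-decide-act cycle shared by intelligent agents (Russell & Norvig, 1995)."""

    @abstractmethod
    def sense(self, environment): ...

    @abstractmethod
    def decide(self, percept): ...

    @abstractmethod
    def act(self, decision): ...

class ProgrammableAgent(IntelligentAgent):
    """PIA sketch: maps each sensed condition to a fixed, pre-programmed action.
    An AIA would instead derive its decision from sensor data via learned algorithms."""

    def __init__(self, program):
        self.program = program  # condition -> action lookup table

    def sense(self, environment):
        return environment["state"]

    def decide(self, percept):
        # Only programmed conditions trigger an action; anything else is ignored.
        return self.program.get(percept, "idle")

    def act(self, decision):
        return decision

# A smart washing machine as a PIA: it reacts only to states it was programmed for.
washer = ProgrammableAgent({"dirty_laundry": "run_wash_cycle"})
print(washer.act(washer.decide(washer.sense({"state": "dirty_laundry"}))))  # run_wash_cycle
print(washer.act(washer.decide(washer.sense({"state": "unknown_event"}))))  # idle
```

The sketch makes the section's point concrete: the PIA's intelligence is bounded by its lookup table, whereas an AIA would replace `decide` with its own environment-driven reasoning.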
In any intelligent agent (living or mechanical) there are three fundamental characteristics: the ability to sense through interaction with the environment; the ability to take a decision in response to that interaction; and the ability to act on the decision (Russell & Norvig, 1995). Therefore, Human Intelligent Agents can be compared with Programmable and Autonomous Intelligent Agents. AI is considered a form of applied epistemology and ontology, which are branches of philosophy dealing with knowledge and logic (Havel, 1987; Trappl, 1986; Tubbs, 2016; Neuhaus, 1991). The McCulloch-Pitts neuron model laid the foundation for artificial neural networks (Pospíchal & Kvasnička, 2015; Wilson, 2009).
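The McCulloch-Pitts neuron mentioned above is simple enough to state in a few lines. The following is an illustrative sketch (the function name is our own): the neuron fires (outputs 1) when the sum of its binary inputs reaches a threshold, which is how early neural models simulated logical reasoning:

```python
def mcculloch_pitts_neuron(inputs, threshold):
    """McCulloch-Pitts neuron: fires (returns 1) when the number of active
    binary inputs meets the threshold; otherwise returns 0."""
    return 1 if sum(inputs) >= threshold else 0

# With threshold 2, a two-input neuron behaves like logical AND:
print(mcculloch_pitts_neuron([1, 1], threshold=2))  # 1
print(mcculloch_pitts_neuron([1, 0], threshold=2))  # 0

# With threshold 1, the same neuron behaves like logical OR:
print(mcculloch_pitts_neuron([0, 1], threshold=1))  # 1
```

Networks of such threshold units were the conceptual bridge from biological cognition to the artificial neural networks underpinning modern AI.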

Computer Science (Development of AI)
The study of natural cognition, behaviour, and relationships provides insight and guidance for artificially simulating cognition, which is the foundation of AI. The medium for this simulation is algorithms, which are a subject of computer science (Fulmer, 2019; McCarthy, 2007; Turing, 1947).

Engineering and Technology (Deployment of AI)
AI algorithms are then deployed in various devices and machines to assist humans in a variety of tasks (Alsamhi et al., 2019). The design and development of AI-enabled machines also integrates fields of engineering (Mihret, 2020), such as electronics engineering, mechanical engineering, and robotics, with computer science.

Social Sciences (Impacts of AI and Governance of AI)
AI is integrated into everyday life through a range of engineered gadgets and machines. How we use technology explains its impacts on society (Feher & Katona, 2021), which establishes the link between the subjects of engineering and technology and the social sciences.
Source: Author.
¹ An actuator is a mechanical part that transforms an original source of energy (usually electrical energy) into mechanical energy, causing movement in a machine (Tondu, 2015).

S. Sharma
All three types of intelligent agents (Human, Programmable, and Autonomous) can independently react to their environment, and all three exhibit an evolutionary development (Eiben & Smith, 2015). This comparative perspective highlights the importance of the behavioural aspects of intelligent agents. Interaction between intelligent agents establishes a behavioural approach between them (Pudane, Lavendelis, & Radin, 2017). The behavioural impacts emerging from interactions between humans and intelligent machines could lead to complex circumstances for humans, including ethical implications and effects on well-being (Mishra, Pattanayak, Shankar, Murthy, & Singh, 2023).

Benefits and concerns of AI
There is a vast body of literature discussing the benefits and concerns of AI, reporting experiences from many different sectors. Therefore, to synthesize key knowledge on the benefits and concerns of AI, 20 articles from the Scopus database, published within the last 10 years (2013 to 2022), were selected for a thorough review. The selection of articles was based on the number of citations: it included the 10 most cited articles retrieved in response to the search query ["Artificial Intelligence" AND "Automation" AND "Benefits"] and, similarly, the 10 most cited articles retrieved in response to the query ["Artificial Intelligence" AND "Automation" AND "Concerns"].

Benefits of AI
Angelopoulos et al. (2020) assert that the digitalized economy brings efficiency, productivity, enhanced safety, and better-quality outputs. Wang et al. (2019) conducted an empirical analysis of AI in polyp detection, concluding that AI is more reliable than humans in clinical tasks, irrespective of various factors; they noted that AI allows standardized performance. Egger, Ley, and Hanke (2019) highlighted AI's unique potential in recognizing human emotions, playing a critical role in personalizing therapy and guiding rehabilitation. Jha, Doshi, Patel, and Shah (2019) state that Artificial Neural Network (ANN) systems benefit agriculture through improved crop monitoring, weed detection, and analysis of climatic impacts. AI and robotics in agriculture boost efficiency and productivity (Talaviya, 2020). Buhalis et al. (2019) argue that AI deployment in the tourism industry enhances service quality by analyzing tourist preferences. Raisch and Krakowski (2021) emphasize AI's benefits for businesses, including productivity and innovation. Borges et al. (2020) support this message, stating that AI and automation offer unique business advantages. Pan and Zhang (2021) argue that AI in construction engineering and management (CEM) enhances safety for construction workers. Panagiotopoulos and Dimitrakopoulos (2019) also highlight AI's role in creating a safer commuting environment.

A classification of the benefits of AI
In the review of the literature, it was found that researchers from various fields have consistently reported similar advantages associated with the adoption of AI. Based on a thorough examination of the selected articles, a list of AI benefits was compiled, and the benefits were categorized into four distinct groups. The classification, as outlined in Table 3, encompasses the following categories: (1) Growth and Profits; (2) Performance, Ease, and Convenience; (3) Safety; and (4) Sustainability. Under the "Growth and Profits" category, benefits of AI that aid users in expanding their commercial objectives and increasing their profits were grouped. The second

category, "Performance, Ease, and Convenience," pertains to enhancements in performance, greater accuracy, increased speed, and time and cost savings. The "Safety" category includes advancements in public safety through the implementation of intelligent surveillance systems, road safety measures, and more. Lastly, the "Sustainability" category focuses on the impact of AI adoption on aspects such as food security, job creation, energy conservation, and the promotion of transparency and equality in society.

Concerns of AI
According to Longoni, Bonezzi, and Morewedge (2019), when medical care is provided by AI, patients express distrust of the treatment because they think AI is not capable of sensing their uniqueness. Egger et al. (2019) find facial recognition AI unable to accurately interpret emotions, leading to possible errors and biases in decision-making. Shareef et al. (2018) assert the necessity of strong connectivity for the reliable functioning of Home Energy Management Systems (HEMS). Pan and Zhang (2021) emphasize that the effectiveness of AI in construction engineering and management (CEM) relies on data accuracy and algorithm robustness. Angelopoulos et al. (2020) raise strong concerns about (data) security, integrity, and confidence in digital systems. Wang et al. (2019) caution that AI functioning without human involvement can lead to erroneous decisions.
Similarly, Panagiotopoulos and Dimitrakopoulos (2019) reveal consumer acceptance of driverless vehicles, but also concerns about trust and privacy. Shen et al. (2020) highlight the technical challenges facing automated transport systems. Jha et al. (2019) and Talaviya et al. (2020) describe how AI in agriculture depends on data accuracy and technology robustness. Wang et al. (2019) and Campbell et al. (2020) suggest that AI will require human collaboration and that job displacement is an issue. Fleming (2019) raises concerns about unemployment and social anxiety in a digitalized economy due to job displacement. McClure (2018) adds that early adopters of AI may gain more benefits, leading to societal disparity. Huang and Rust (2020) discuss the significant replacement of humans by AI in various job domains, causing job displacement. Buhalis et al. (2019) argue that AI in the tourism sector may erode unique customer experiences, while Huang and Rust (2020) mention anthropomorphized social robots providing warmth to customers. Sjödin et al. (2018) warn of potential disruptions when adopting smart technologies in the manufacturing sector, potentially leading to an exclusive culture within organizations. Borges, Laurindo, Spínola, Gonçalves, and Mattos (2021) observed a lack of knowledge about strategic AI use in businesses. Addressing the concerns of AI, Kamble, Gunasekaran, Ghadge, and Raut (2020) emphasize the importance of understanding its societal impacts and the need for proper strategies in AI adoption. McClure (2018) underscores the potential inequality, social anxiety, mental health issues, and job displacement associated with AI, robotics, and automation.

A Classification of the Concerns of AI
AI systems, particularly those used for decision-making, could perpetuate and amplify societal biases, leading to unfair and unjust outcomes. AI systems can be used for malicious purposes, such as surveillance, censorship, and manipulation. It is also reported in the literature that AI systems could become uncontrollable and cause harm. Additionally, AI-driven automation can cause significant job displacement, which could be disruptive for certain sectors and can lead to social anxiety. However, it was found that the numerous concerns of AI discussed in the literature across academic (and scientific) disciplines are repetitive; the concerns of AI can therefore be classified into relatively few categories. Synthesizing the findings from the literature, the concerns of AI were systematically classified into three distinctive categories (Table 4).

Key message from literature on benefits and concerns of AI
Based on insights gained from state-of-the-art literature on the benefits and concerns of AI, it can be inferred that we are delegating societal control to AI, which may not be fully prepared for this responsibility. Profit-driven companies deploy AI without comprehensive knowledge of its impacts (Borges et al., 2021). Shen et al. (2020) reported similar concerns about the unprepared adoption of AI.
According to Campbell et al. (2020), early adopters of AI are venturing into uncharted territory; it cannot be guaranteed that the outcomes of adopting AI will be only (or largely) positive for them. In another recent study, Huang and Rust (2020) conclude that businesses should deploy AI strategically, arguing that there are different types of AI (mechanical, thinking, and feeling) and that not all types can perform all tasks.
From this review, we also conclude that the concerns of AI should not be attributed solely to AI itself; rather, these concerns result from integrating AI with other technologies and machines. Concerns about accountability, responsibility, and human rights arise as AI gives rise to self-programming cognitive machines, which introduce unpredictability into human-AI interactions (Casares, 2018; Choi & Moon, 2023).
Society is unprepared for machines with human-like behaviour, creating a challenging need to adjust societal norms, interpersonal dynamics, regulation, and governance. While AI agents aim to assist humans, challenges remain in preventing conflicts arising from human-AI interactions.
Existing literature about the impacts of AI on society implicitly presents diverse stakeholder viewpoints, underlining the attribution of AI's societal impacts to multiple stakeholders' interests and values.

Discussions
The responsible adoption of AI is a complex and multifaceted challenge that requires the involvement of a range of stakeholders (Kilian et al., 2023). One of the key issues surrounding responsible AI is the knowledge gap on the governance of these technologies, which might be essential for guiding society in embracing a new norm where AI agents play a more human-like role in our daily lives (Gonzalez-Jimenez, 2018). Building upon the literature review presented in this article, two relevant discussions are presented in this section. These discussions help explain the meaning and significance of the findings from the literature, and they also inform ideas for the future scope of research on responsible AI from a multi-stakeholder perspective. In the first discussion, the association between the psychological concept of 'ego' and the development and use of AI is argued, presenting views about how the ego of stakeholders may determine the ethical concerns of AI. This discussion is inspired by Borges et al. (2021), Kamble et al. (2020), and Shen et al. (2020), who argue that the adoption of AI in everyday life is driven by its unique benefits while adopters give less importance to the potential concerns of AI adoption. In the second discussion, views about responsible AI from the governance perspective are presented.

Discussion 1: psychology of interests and ethical issues of AI
The predictors of human behaviour are influenced by our interests, which in turn have direct consequences on our surroundings (Rounds & Su, 2014). The factors that shape human interests have been extensively studied in social psychology and neurology, with a widely accepted model suggesting that interests stem from individual characteristics, environmental factors, and psychological states (Krapp, 1999). According to Freud's psychoanalytical theory (Freud, 1923), human actions and behaviour are governed by the id, ego, and super-ego (Siegfried, 2014). The id represents instinctive desires, while the super-ego guides thinking by aligning instincts and desires with moral and ethical principles (Siegfried, 2014; Boag, Bazan, & Boag, 2014; Sletvold, 2013; Freud, 1923). The ego acts as an intermediary, striving to balance the demands of the id and super-ego (Siegfried, 2014; Kendra, 2023; Freud, 1923). Freud's theory suggests that the ego constantly struggles between unconscious desires and ethical principles; when the ego aligns with the super-ego, this favours moral decision-making that considers the needs of others and benefits society (Freud, 1923). Conversely, if the ego aligns more with the id, conflicts between human behaviour and moral values may arise (Magrì, 2022; Fricker, 2007). These conflicts can trigger defensive mechanisms that result in self-centredness and disregard for others' autonomy, fairness, equity, and well-being, presenting ethical concerns originating from the ego (Freud, 1936).
The adoption of AI in daily life has the potential to amplify the influence of the human id (Slote, 2020). AI has the power to revolutionize lifestyles by enabling autonomous decision-making and reducing dependence on others. However, the increasing prevalence of AI is leading to a growing inclination towards individualistic living, which poses challenges to societal cohesion (Slote, 2020). Examples abound of AI enabling individuals to fulfill their desires without involving other humans, often disregarding societal norms and ethics. A notable case is Akihiko Kondo's marriage to the virtual celebrity Hatsune Miku, which exemplifies the concept of fictosexuality or AI sexuality (Aoki & Kimura, 2021).
Despite recognition of the significant ethical and safety concerns associated with AI, its adoption continues to expand due to the interests of multiple stakeholders (AI developers, suppliers, and adopters). Therefore, empirical research is essential to establish causal relationships between stakeholders' interests and the potential concerns associated with AI adoption. Such studies can provide insights into the responsible behaviour of multiple stakeholders in the context of AI adoption.

Discussion 2: responsible AI and responsibility of stakeholders
Artificial Intelligence (AI) has the potential to greatly impact society, the economy, and politics. It is, therefore, necessary to ensure that AI is developed and adopted in a responsible manner. The responsibility for maintaining an ethical and safe living environment lies with every individual and stakeholder in society (Argandoña, 2011; Popa & Salanțȃ, 2015). However, governance plays a crucial role in guiding society (Katsamunska & Rosenbaum, 2019; Kjaer, 2004) by providing direction to stakeholders and establishing provisions for socially responsible behaviour. The current governance mechanisms are insufficient for addressing the conflicts arising from AI-human interactions (Taeihagh, 2021; Eleanor et al., 2020; King, Aggarwal, Taddeo, & Floridi, 2020). There remains a governance dilemma regarding which mechanisms can effectively address the concerns associated with AI (Cihon, Maas, & Kemp, 2020).
Different governance approaches can be employed to ensure the responsible development and use of AI, each with its own strengths and limitations (Roski et al., 2021; Cihon, Maas, & Kemp, 2020). Self-regulation is one approach, where companies and organizations are expected to adhere to principles of social responsibility voluntarily (Roski et al., 2021). This approach allows for flexibility and innovation but may be challenging to enforce and may prioritize the interests of individual companies over broader considerations of social responsibility. Another approach is government control through state regulation, where specific laws and regulations govern the development and use of AI (Zuiderwijk et al., 2021; Cihon, Maas, & Kemp, 2020). State regulation provides clarity and can prevent abuses and unethical practices, but it can be slow, inflexible, and struggle to keep up with technological advancements (Cihon, Maas, & Kemp, 2020). A third approach is multi-stakeholder collaborative governance, involving governments, businesses, civil society organizations, and individual users (Gianni, Lehtinen, & Nieminen, 2022; Zuiderwijk et al., 2021). This approach aims to balance the interests of all stakeholders and to ensure transparent, accountable, and responsive governance of AI. However, it can be complex and challenging due to potential conflicts of interest, and it requires high levels of cooperation and coordination among multiple stakeholders.
To encourage the responsible adoption of AI, stakeholders need clear guidance on their roles and responsibilities. However, individual stakeholder interests may sometimes conflict, leading to moral concerns and hindering responsible AI practices. For example, in the autonomous vehicle industry, automakers aim to harness AI for increased safety and efficiency, while consumers and the public prioritize rigorous safety standards and ethical considerations. This conflict can lead to challenges such as inadequate safety regulations, a lack of public trust, and potential regulatory hurdles. Thus, there is a need for more research on the attribution of the societal impacts of AI to stakeholders' interests in a multi-stakeholder context, as the interests of one stakeholder can have ethical and moral implications for others.
Understanding the interplay between stakeholder interests and the associated benefits and concerns of AI is pivotal for informed regulatory decision-making, risk assessments, and technology evaluations. It may allow regulatory bodies to make more informed decisions based on a comprehensive understanding of how AI affects various stakeholders and society. It may also help in attributing the potential risks and benefits of AI, enabling the development of effective regulations and technology evaluation processes that are aligned with societal norms and expectations. This knowledge may enable regulators to craft more precise policies that balance innovation and societal safety (OECD, 2022).

Conclusions
The overall conclusion of this article is that the responsible development, adoption, and use of AI is indeed a multistakeholder responsibility. However, in the context of responsible AI, governance stakeholders can play a particularly important role by setting new protocols for governing the responsible behaviour of the different stakeholders and users of AI. While no single approach is likely to be sufficient on its own, a combination of self-regulation, state regulation, and multi-stakeholder governance might be necessary to ensure that AI is developed and adopted responsibly. By applying the multi-stakeholder framework presented in this study (Fig. 1), we can explain how the different interests of these stakeholders are linked to the concerns surrounding the responsible development, adoption, and use of AI. The theoretical framework can also be used as a tool to identify potential conflicts between the interests of different stakeholders. For example, developers may be more focused on creating cutting-edge AI systems, while users may be more focused on ensuring that these systems are transparent and easy to use. Conflict analysis may potentially guide the formulation of effective strategies for conflict resolution.

Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Table 1
Thematic Categories for Literature Review.

Table 2
A classification of the definitions of AI.

Table 3
Benefits of AI (Summary of main findings from highest cited articles).

Table 4
Concerns of AI (Summary of main findings from highest cited articles).