Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda



Introduction
The expanding use of Artificial Intelligence (AI) in government is creating numerous opportunities for governments worldwide. Traditional forms of service provision, policy-making, and enforcement can change rapidly with the introduction of AI technologies in government practices and public-sector ecosystems. For example, governments can use AI technologies to improve the quality of public services (Montoya & Rivas, 2019; Ojo, Mellouli, & Ahmadi Zeleti, 2019; Toll, Lindgren, Melin, & Madsen, 2019), to foster citizens' trust (Dwivedi et al., 2019), and to increase efficiency and effectiveness in service delivery (Gupta, 2019). AI may also be used by governments to generate more accurate forecasts and to simulate complex systems that allow experimentation with various policy options (Margetts & Dorobantu, 2019). Value can be created in multiple government functional areas, such as decision support, transportation, public health, and law enforcement (Gomes de Sousa, Pereira de Melo, De Souza Bermejo, Sousa Farias, & Oliveira Gomes, 2019).
At the same time, AI use in government creates challenges. While the use of AI in government may increase citizens' trust in governments, it may also reduce citizens' trust in government (Al-Mushayt, 2019; Gupta, 2019; Sun & Medaglia, 2019) and in government decisions (Sun & Medaglia, 2019). Such a decrease may be due to violations of citizens' privacy or a lack of fairness in using AI for public governance (Kuziemski & Misuraca, 2020). Moreover, additional challenges arise from the lack of transparency of black-box systems, such as unclear responsibility and accountability, when AI is used in decision-making by governments (Ben Rjab & Mellouli, 2019; Dignum, 2017, 2018; Wirtz, Weyerer, & Geyer, 2019). These realities raise the stakes for governments, since failures due to AI use in government may have strong negative implications for governments and society.
First, over the past few decades, the adoption of AI in the public sector has been slower than in the private sector. As a result, attention paid to AI use in government is more recent. AI practices and digital transformation strategies from the private sector cannot be copied directly to the public sector because of the public sector's need to maximize public value (Fatima, Desouza, & Dawson, 2020). Compared to the private sector, there is less knowledge concerning AI challenges specifically associated with the public sector (Aoki, 2020; Wang & Siau, 2018; Wirtz, Weyerer, & Sturm, 2020).
Second, AI systems are becoming more complex and less predictable (Hernández-Orallo, 2014), and it is unclear to most governments how this affects public governance. In practice, most governments have a limited understanding of the multifaceted implications for public governance brought about by the use of AI in government. Meanwhile, thought leadership in the areas of governance and AI lags behind the pace at which AI applications are infiltrating government globally. This knowledge gap is a critical developmental barrier, as many governments wrangle with the societal, economic, political, and ethical implications of these AI-driven transformations (IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, 2019).
Third, much of the existing AI research is technical in nature, studying specific technological problems and solutions in the computer science domain (Aoki, 2020). While various studies concerning AI use in government exist beyond the highly technical fields of study (e.g., Etscheid, 2019; Kankanhalli, Charalabidis, & Mellouli, 2019; Winter & Davidson, 2019), there is a scarcity of research on AI governance, policy, and regulatory issues (Thierer, Castillo O'Sullivan, & Russell, 2017; Wang & Siau, 2018). Furthermore, there is a lack of consensus on how to handle the challenges of AI associated with the public sector in the future (Wang & Siau, 2018; Wirtz et al., 2020). Wirtz et al. (2020, p. 826) state that AI governance and regulation need to be addressed more comprehensively in public administration research. Although "researchers, practitioners, and policymakers are starting to pay attention to AI governance, policies, and regulatory issues" (Wang & Siau, 2018, p. 3; also see European Commission, 2020; Kankanhalli et al., 2019), a systematic overview of the implications of AI use in government for public governance is still lacking.
Collectively, these realizations shape the point of departure for this article. To better understand how the knowledge gaps can be addressed, this article aims to 1) systematically review the literature on the implications of the use of AI for public governance and 2) develop a research agenda. In this study, we define public governance as an inclusive term that encompasses all the rules and actions related to public policy and services (see Section 2.2). This study lays the foundation for a Government Information Quarterly special issue on the topic of implications of the use of AI for public governance (see Section 5).
This article is structured as follows. In the successive sections, we first discuss the research background, followed by the approach used for the systematic literature review and the research agenda development. Subsequently, we describe the results from our analysis of research articles concerning the public governance implications of government use of AI. We then discuss the special issue that this article introduces, systematically analyze the articles included in this special issue, and discuss these articles' contributions to the status of research in the field. Thereafter, based on our systematic literature review and analysis of articles included in this special issue, we propose a research agenda for the implications of government use of AI for public governance. Finally, we describe the conclusions drawn from this study.

Research background
This section provides the necessary background concerning the key concepts of our study, namely AI (Section 2.1), including the addressed types of AI technologies and applications, and the implications of the use of AI for public governance (Section 2.2).

Artificial intelligence
There are diverse definitions of AI systems, based on a) the disciplines to which they apply and b) the phases of an AI system's lifecycle, including research, design, development, deployment, and utilization (UNESCO, 2020). In this paper's context, the critical characteristics of an AI system lie in the technological components that provide it with the capacity to process data and information in a way that entails intelligent behavior. This capacity may consist of aspects of learning, planning, prediction, and control (UNESCO, 2020). In practice, AI systems comprise algorithms and models that generate these abilities. By design, these components provide the AI system with the ability to act with some level of autonomy.
As such, and given that this paper focuses on public governance of AI use, we adopt a dominant, yet simplified, definition of AI from policy-making circles, where AI is defined through "systems that display intelligent behaviour by analysing their environment and taking actions-with some degree of autonomy-to achieve specific goals" (European Commission, 2018, p. 2; High-Level Expert Group on Artificial Intelligence, 2019, p. 1). Practically, therefore, AI "refers to a range of different technologies and applications used in many ways" (Susar & Aquaro, 2019, p. 419). AI systems interact with environments that comprise both the relevant objects and the interaction rules (Thórisson, Bieger, Thorarensen, Sigurðardóttir, & Steunebrink, 2016). Tasks assigned to an agent describe which environmental situations are desired and undesired (Thórisson et al., 2016), and each agent maps sequences to actions (Russell & Norvig, 2016). In practice, AI is used daily across the government ecosystem (European Commission, 2018). Similarly, UNESCO (2020) considers AI systems to be "technological systems which have the capacity to process information in a way that resembles intelligent behavior" (p. 4). These systems usually include aspects of reasoning, learning, perception, prediction, planning, or control.
Approaches and technologies that comprise an AI system may include, but are not limited to: machine learning, including supervised and unsupervised learning (Smola & Vishwanathan, 2008; UNESCO, 2020); Artificial Neural Networks (Krenker, Bester, & Kos, 2011); fuzzy logic (Klir & Yuan, 1995; Yen & Langari, 1999); case-based reasoning (Cortés & Sanchez-Marre, 1999); natural language processing (Liddy, 2001); cognitive mapping (Eden, 1988; Golledge, 1999); multi-agent systems (Ferber & Weiss, 1999); machine reasoning (Bottou, 2014), including planning, predictive analytics, knowledge representation and reasoning, search, scheduling, and optimization; and, finally, cyber-physical systems (Baheti & Gill, 2011; Lee, 2008; Radanliev, De Roure, Van Kleek, Santos, & Ani, 2020), including internet-of-things and robotics, computer vision, human-computer interfaces, image and facial recognition, speech recognition, virtual assistants, and autonomous machines and vehicles.
Due to AI's breadth, it remains an extensive and multidisciplinary research field, rich with a vast number of papers addressing its applications and implications. These papers continue to emerge from a broad spectrum of viewpoints: highly technical, operational, practical, and philosophical, to name a few. Within that wide spectrum, this paper focuses specifically on the thread of literature that addresses the implications of the aforementioned AI approaches and applications in public governance contexts, spanning public administration, digital government, management, information science, and public affairs. For this reason, the systematic literature review presented in this paper intentionally excludes papers that explore "how to do" AI technically (e.g., how to design, develop, or optimize AI approaches and applications), a perspective dominant in the computer science, engineering, and applied science literature.

AI use in government: implications for public governance
In this article, we focus on the use of AI in government and on the implications of this use for public governance. Building on the conceptualization developed by Fukuyama (2013), governance is defined as the activity to "make and enforce rules, and to deliver services" (p. 350). Publicness is defined by the object of governance as the "production and delivery of publicly supported goods and services" (Lynn, Heinrich, & Hill, 2000, p. 235). The main actors involved in public governance encompass individuals, citizens, organizations, and systems of organizations in the public, private, and nonprofit sectors (Bingham, Nabatchi, & O'Leary, 2005, p. 547). These actors engage in collective decision-making that is constrained, prescribed, and enabled by laws, rules, and practice (Lynn et al., 2000) to achieve the object of public governance. Building on these existing definitions, we define public governance as all the rules and actions related to public policy and services.
The rise of AI use in government, coupled with the increased sophistication of AI applications, is triggering many public governance questions for governments worldwide. These include challenging economic problems related to labor markets and sustainable development (OECD, 2019a; World Economic Forum, 2018); societal concerns related to privacy, safety, risk, and threats (Yudkowsky, 2008); social and ethical dilemmas about fairness, bias, and inclusion (International Labour Organization, 2019); and governance questions related to transparency, regulatory frameworks, and representativeness (OECD, 2019b). For example, how does the implementation of specific AI technologies affect which actors are accountable and responsible when government officials make decisions based on AI technology (Wirtz et al., 2020)? And what policies and regulations can be used to govern AI use in specific government organizations?
The public governance questions raised by AI use in government are also intermingled with the complex "wicked problems" faced by governments, such as rising perceptions of threat in societies and digital-era political turbulence, in which AI is taking center stage (Bostrom & Yudkowsky, 2014; Fountain, 2019). The use of AI also poses various challenges unique to the public sector, such as the requirement that AI adoption in the public sector advances the public good (Cath, Wachter, Mittelstadt, Taddeo, & Floridi, 2018). Furthermore, the use of AI in the public sector should be transparent, at least to a certain extent, to gain citizens' confidence in the AI application and to ensure that this trust is deserved (Bryson & Winfield, 2017). In addition, a diverse set of stakeholders is involved, and these stakeholders may have conflicting interests and agendas that add further complexity. There is also a need for "regular scrutiny and oversight that is generally not seen in the private sector" (Desouza et al., 2020, p. 206; based on BBC News, 2019). Wirtz et al. (2020, p. 826) state that "public administration can hardly keep up with the rapid development of AI, which is reflected in the lack of concrete AI governance and legislation programs." Therefore, policymakers need to pay more attention to the potential threats and challenges posed by AI (Wang & Siau, 2018). Many of the concerns mentioned above call for better governance structures, including policy development at a governmental level.

Research approach: systematic literature review
This section describes the approach used to conduct an extensive literature review. We adopted the systematic literature review approach defined by Kitchenham (2004) and, in the next sections, describe the following steps: 1) study identification, 2) study selection, 3) study relevance and quality assessment, 4) data extraction, and 5) data synthesis.

Step 1: identification of studies
In the first step, we determined the objectives and questions that shaped our literature review. Acknowledging that literature reviews can be used for various purposes (Sekaran & Bougie, 2016), we set three objectives for our literature review: 1) to position the identified research relative to existing knowledge, 2) to obtain useful insights into the research methods other scholars have used to study the implications of AI use in government for public governance, and 3) to obtain useful insights into the implications of AI use in government for public governance. To attain the first objective, i.e., to position the identified research relative to existing knowledge, we asked ourselves the following questions: a) In which contexts (e.g., research disciplines, regions, countries) has the topic of public governance implications of AI use in government been investigated by previous research? b) What are the objectives and contributions of previous research concerning the implications of AI use in government for public governance? c) What theories and theoretical models have been indicated (e.g., developed, used, tested, or applied) in studies concerning the implications of AI use in government for public governance?
The second objective, i.e., to obtain useful insights on research methods other scholars have used to study the implications of AI use in government for public governance, led to the following question: d) What research approaches and methods have been used in studies addressing AI use in government for public governance?
For the third objective of our literature review, i.e., to obtain useful insights on the implications of AI use in government for public governance, we formulated the following questions: e) What are the main elements of public governance affected by the use of AI? f) What are the potential benefits and challenges of the use of AI for public governance?
First, we used three complementary sources to identify scientific studies concerning the implications of the use of AI in public governance (see Table 1): Web of Science, Scopus, and the Digital Government Reference Library (DGRL) (December 2019 version). Together, these databases cover more than 5000 publishers closely related to the topic under study, including, for example, Elsevier, Springer, Wiley-Blackwell, Taylor & Francis, Sage, IEEE, Oxford University Press, and Emerald. The search was later complemented with Google Scholar and updated searches in November 2020 (see Section 3.2).

Step 2: selection of studies
In the second step, the selection of studies, we defined the search terms and the exclusion and inclusion criteria. Using the search terms from Table 1, we limited the search results to journal articles and conference proceedings written in English and published between 2010 and 2020 (the search covered papers published no later than November 11, 2020). For the Web of Science and Scopus searches, we also limited the results to particular research disciplines to identify the most relevant search results; Scopus and Web of Science use different divisions of research disciplines. For Scopus, we limited the search to four fields: social sciences; decision sciences; multidisciplinary; and business, management and accounting. For Web of Science, we limited the search to the following research disciplines: public administration, library/information science, political science, management, communication, multidisciplinary sciences, engineering multidisciplinary, international relations, and telecommunications. By limiting our considerations to these research disciplines, we excluded disciplines such as computer science, physics, and medicine, which we assumed would have resulted in highly technical articles not necessarily pertaining to public governance and therefore outside our study's scope. For the DGRL database, it was impossible to refine the search results by research discipline, since this database consists of a collection of references in an EndNote library. However, given that the database is limited to digital government publications, publications addressing AI in this collection are assumed to be relevant to our literature review.
We applied the search strategies mentioned above in two steps. First, in April 2020, we searched for scientific studies concerning the implications of the use of AI for public governance, which resulted in 137 papers in total (47 results in the DGRL, 22 results in Web of Science, and 68 results in Scopus). For the Scopus search, we only included the 30 most relevant results (as assessed by the database), since we found that relevance was highest for the first twenty papers and dropped strongly thereafter. Second, considering the renewed attention to AI research, and especially the more recent attention to the implications of the use of AI for public governance, we updated the searches in Web of Science (three new relevant results), Scopus (four new relevant results), and the DGRL (one new relevant result) in November 2020, to be as inclusive as possible. Furthermore, this second step encompassed a cross-check in the Google Scholar database, in which we examined whether any additional relevant sources were available. We examined the first 50 search results for the keywords 'AI,' 'Artificial intelligence governance,' 'Artificial intelligence policy,' and 'Artificial intelligence government policy.' This search did not lead to new results; several search results that had already been identified through Web of Science, Scopus, and the DGRL also showed up in Google Scholar. In total, our search led to 107 search results. After removing duplicates, we ended up with 85 unique studies.

Step 3: study relevance and quality assessment
Step three of our systematic literature review was to assess the relevance and quality of the selected studies. This phase consisted of two main steps. First, for each of the 85 identified studies, we read the title and abstract to determine the relevance of the study using the following three criteria:
1) AI use should play a substantial or major role in the study (its research questions, objective, etc.). Studies in which the focus on AI use was minor or secondary were excluded in this phase.
2) AI use in public governance should be central to the study. If the study did not (at least partly) address AI use in the context of public governance (or, as a synonym, 'the public sector'), it was excluded in this phase.
3) The implications of the use of AI in public governance should be discussed as the main topic. In other words, AI should not be mentioned in passing or listed superficially without being at the core of the research question. For example, if AI was discussed as a tool or technology (among others), or exclusively as an application, without being linked firmly to one or more public governance implications, the article was excluded.
Each abstract was independently examined by at least two of the authors. Minor differences of opinion were discussed and resolved in a meeting during which an agreement was reached. According to the criteria above, 48 studies were removed, and 37 studies remained.
Second, the relevance and quality of these 37 studies were assessed by reading the complete articles. Each study was independently assessed by at least two of the authors, using the following quality dimensions, derived from Batini, Cappiello, Francalanci, and Maurino (2009) and Bano and Zowghi (2015):
• Accuracy: the objectives of the study are clearly stated, and data collection methods are adequately described. Important statements in the paper are supported by references.
• Consistency: the design of the study is appropriate for the research objectives. The study's research questions are answered or its research objective is attained.
• Completeness: the study's research approach is described in sufficient detail.
• Timeliness: the study was published in the past ten years.
The inter-coder reliability was high, with the different coders arriving at almost identical results. The results were discussed among the authors, and agreement was reached. At this stage, eleven studies were excluded from our shortlist for the following reasons: 1) the studies did not meet the above-mentioned quality criteria, 2) the studies merely contained the opinion of the author without describing any particular research approach or design, 3) the studies had an insufficient focus on the implications of the use of AI for public governance, or 4) the studies were published as short poster descriptions included in conference proceedings. This led to a final selection of 26 studies directly addressing public governance questions in relation to AI (see Fig. 1). The limited number of studies remaining after applying our inclusion and exclusion criteria, despite our extensive search for relevant literature, is a finding in its own right: it highlights the scarcity of research examining the impact of AI on public governance.
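The selection funnel described in Sections 3.2 and 3.3 can be verified with simple arithmetic. The following sketch is purely illustrative; the variable names are ours, and the counts are those reported in the text:

```python
# Screening funnel for the systematic literature review,
# using the counts reported in the text (illustrative only).

# Records identified per source in the April 2020 search
# (only the 30 most relevant Scopus results were retained).
initial = {"DGRL": 47, "Web of Science": 22, "Scopus": 30}

updates = 8  # new relevant records from the November 2020 update searches
total_identified = sum(initial.values()) + updates  # 107 search results

after_dedup = 85  # unique studies remaining after duplicate removal
duplicates = total_identified - after_dedup

excluded_on_abstract = 48  # title/abstract screening against the three criteria
after_screening = after_dedup - excluded_on_abstract  # 37 studies

excluded_on_full_text = 11  # full-text relevance and quality assessment
final_selection = after_screening - excluded_on_full_text  # 26 studies

print(total_identified, duplicates, after_screening, final_selection)
```

Running the sketch reproduces the counts reported in the text: 107 identified records, of which 22 were duplicates, 37 studies after abstract screening, and a final selection of 26 studies.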

Step 4: data extraction
To extract data from our literature review, we used a spreadsheet to record the metadata for each of the selected studies. Table 2 depicts the metadata we collected about the 26 selected studies, including descriptive information, approach-related information, quality-related information, and public governance and AI-related information. To enhance coherence as much as possible, these metadata categories were derived from the literature review questions (see Section 3.1).

Table 1
Search terms used in the literature review.
("Artificial intelligence" OR "AI" OR "machine learning" OR "deep learning" OR "reinforcement learning" OR "supervised learning" OR "unsupervised learning" OR "neural networks" OR "natural language processing" OR "computer vision" OR "image recognition" OR "facial recognition" OR "face recognition" OR "speech recognition" OR "intelligence systems" OR "virtual assistant" OR "autonomous vehicle" OR "predictive analytics" OR "robotics" OR "self-driving") AND ("governance" OR "government" OR "public management" OR "public governance" OR "public sector" OR "public administration" OR "public policy") a
a The search terms used for the literature review were identical across all three databases searched (Web of Science, Scopus, and DGRL), with two significant distinctions: 1) in Web of Science and Scopus these terms were searched in the article titles, while for DGRL this was not possible, and the terms were therefore searched in the article keywords; 2) when searching DGRL, all search terms on (public) governance, government, and public management/sector/administration/policy were removed, given that this database is already limited to articles focusing on digital government and public governance.
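The boolean search string in Table 1 can be assembled programmatically. The sketch below is a hypothetical illustration: the term lists follow Table 1, while the helper function and variable names are ours:

```python
# Assemble the Table 1 boolean search string (illustrative sketch).

AI_TERMS = [
    "Artificial intelligence", "AI", "machine learning", "deep learning",
    "reinforcement learning", "supervised learning", "unsupervised learning",
    "neural networks", "natural language processing", "computer vision",
    "image recognition", "facial recognition", "face recognition",
    "speech recognition", "intelligence systems", "virtual assistant",
    "autonomous vehicle", "predictive analytics", "robotics", "self-driving",
]

GOVERNANCE_TERMS = [
    "governance", "government", "public management", "public governance",
    "public sector", "public administration", "public policy",
]

def or_group(terms):
    """Join quoted terms into one parenthesized OR group."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# For Web of Science and Scopus the full query was applied to article titles;
# for DGRL only the AI group was used (see note a in Table 1).
query = or_group(AI_TERMS) + " AND " + or_group(GOVERNANCE_TERMS)
print(query)
```

Keeping the term lists as data makes it straightforward to reproduce the same query consistently across databases, or to rerun the search later with an extended vocabulary.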

Step 5: data synthesis
The final step of our study concerned the data synthesis. This step encompassed three sub-steps. First, we systematically analyzed the raw data derived through the above-mentioned literature review procedure and wrote down our findings in Section 4. Second, besides collecting metadata concerning the selected articles in our literature review, we collected the same metadata concerning the articles included in this special issue, as reported in Section 5. Third, synthesizing from the analysis of literature included in Sections 4 and 5, we developed a research agenda on the implications of the use of AI for public governance (see Section 6). The procedure used to create the research agenda recommendations is as follows. Each of the three manuscript authors first individually studied the information collected about each of the selected articles. By comparing the data derived for each metadata dimension (see Table 2) for all 26 articles, each author derived patterns and remarkable findings. These were written down, discussed, and prioritized among the three authors, and then one author took the lead in developing the draft recommendations. The other two authors reviewed and improved the research agenda where needed, leading to the final research agenda. The selection of research agenda items is based on the informed assessment of 26 articles. However, other scholars might arrive at slightly different recommendations.

Results from the systematic literature review
This section describes the results drawn from our analysis of the selected research articles that concern the implications of the use of AI for public governance. Below we report the findings from our descriptive analysis, approach analysis, quality analysis, and content analysis. The data underlying our study are publicly available through 4TU.Research Data, DOI: 10.4121/14247239.

Descriptive analysis
The first section of our analysis concerns the descriptive information.
As part of this analysis, we studied the selected studies' objectives (see Table 3), the journals and conferences where these studies were published, the years of publication, and the databases through which we found them. Most studies are exploratory. Although our literature search started with the identification of 85 articles, it ultimately led to a list of only 26 papers that were focused on the implications of AI use in government for public governance. Furthermore, all papers shortlisted according to the set criteria were published in the past three years. Although AI is not a new technology given its scholarly roots in the 1950s (Natale & Ballatore, 2020;Rossi, 2016;Wirtz & Müller, 2019), scholarly publications that focus on the implications of the use of AI for public governance specifically are still relatively limited in contrast to AI research in general.
Based on our criteria, most studies addressing the implications of the use of AI for public governance were published in journals (n = 18), and a smaller portion in conference proceedings (n = 8). The majority of the conference papers were found through the DGRL database. Twelve of the journal articles were identified through Web of Science, or through Web of Science combined with Scopus and/or the DGRL. Six papers were found solely through Scopus. This finding shows that the three databases we searched provided relatively unique search results; it is therefore useful to combine searches in these different databases when researching the implications of AI use in government for public governance.
The journals in which the included studies were published varied, with nearly all journals appearing only once; the exceptions are the International Journal of Public Administration, from which two publications were included, and Government Information Quarterly, from which five publications were included. Most of the journals in which the papers were published concern public administration, public policy, or public management (e.g., International Journal of Public Administration, Public Management Review). Other articles were published in journals that concern information science (Government Information Quarterly) and computer science and engineering.

Approach analysis
This section discusses several aspects of the approaches used in the studies in our sample, including the research method(s) used, availability of underlying research data, and theory mentioned and used in the investigated studies. The studies in our sample used a large variety of research methods, although literature reviews are by far the most dominant research approach (n = 16) (see Fig. 2). Other methods used in the identified research concerning the implications of the use of AI for public governance are official document and (strategy) report analysis, case studies, assessment of existing AI projects or initiatives, interviews, expert panels, action research, website analysis, Analytic Hierarchy Process, and Systematic Literature Network Analysis.
Qualitative methods are dominant in the identified studies: more than three-quarters of the studies in our sample are qualitative (n = 21), while the remaining studies are quantitative (n = 2) or use mixed methods combining qualitative and quantitative approaches (n = 3) (see Fig. 3). Only one of the studies has openly made its underlying research data available, despite the growing trend of openly sharing underlying research data as a positive open science practice, which increases transparency and trust and allows scrutiny of the findings (Curty, Crowston, Specht, Grant, & Dalton, 2017; Enke et al., 2012; Zuiderwijk, 2015). Only one study provides an explanation for not openly sharing its research data. We acknowledge that the lack of availability of underlying research data could, in addition to authors' decisions or issues such as the privacy-sensitivity of data, be the outcome of the publication policies adopted by the journals and publishers concerned.
Finally, as part of our approach analysis, we examined whether the selected studies referred to any theory and, if so, how they used the theory in their research approach. This analysis shows that only four of the 26 examined studies mention a specific theory. An example of a study that mentions theory is Androutsopoulou et al. (2019), which uses media richness theory and channel expansion theory to support the usage of AI-enabled chatbots to improve government-citizen communication. Furthermore, Sun and Medaglia (2019) use framing as a broad theoretical lens to gather the stakeholders' perspectives. Ojo et al. (2019) use technology adoption theory to support the research background and their conclusions. Wirtz et al. (2020) use regulation theory as a basis for an AI governance framework. None of the analyzed studies aim to test or extend a theory. In essence, most shortlisted studies tend to be practical in their approach and focus on conceptual frameworks. There may be several explanations for this under-theorization.

Table 2
Overview of information collected about each of the selected articles (excerpt).

Descriptive information
Article number (#): Study number, assigned in an Excel worksheet
Complete reference in APA style: What is the complete reference to this source? (including the author(s) of the article, the year in which it was published, the article's title, and other source information)
Year of publication: In which year was the study published?
Journal / conference: Does the paper concern a journal or conference publication? In which journal or in which conference proceedings was the study published?

Table 3
Objectives of the selected studies (excerpt).

3 Androutsopoulou et al. (2019): To present a novel approach, along with the architecture of a supporting ICT platform, for the use of AI technology (chatbots) in the public sector to improve communication between government and citizens
4 Aoki (2020): To investigate the public's initial trust in so-called "artificial intelligence" (AI) chatbots about to be introduced into use in the public sector
5 Ben Rjab and Mellouli (2018): 1) to identify the key technologies that make a smart city work (including AI), and 2) to analyze the roles of these technologies (encompassing the challenges and the opportunities) in the development of smart cities
6 Ben Rjab and Mellouli (2019): To conduct a literature review to investigate the role of AI in the different sectors of smart cities
7 Bullock (2019): To explore the impacts of AI on discretion and the potential consequences for bureaucracy and governance
8 Chen, Ran, and Gao (2019): …
14 Kuziemski and Misuraca (2020): To examine how the use of AI in the public sector, in relation to existing data governance regimes and national regulatory practices, can intensify existing power asymmetries
15 Liu, Lin, and Chen (2019): To analyze the risks posed by the 'algorithmization' of government functions to due process, equal protection, and transparency, and to assess governance proposals and suggest ways of improving the accountability of AI-facilitated decisions
16 McKelvey and MacDonald (2019): To summarize the two AI initiatives in the Canadian public service and propose more inclusive AI governance in Canada
17 Mikhaylov, Esteve, and Campion (2018): To discuss the opportunities for and challenges of AI for the public sector, and to propose a series of strategies for successfully managing cross-sectoral collaborations
18 Montoya and Rivas (2019): To discuss factors that may have a direct impact on the AI preparedness of Latin American and Caribbean (LAC) countries
19 Ojo et al. (2019): To examine the application of AI solutions in the context of recent public management and governance paradigms, including DEG, PVM, and NPG
20 Pencheva, Esteve, and Mikhaylov (2020): To offer an in-depth review and analysis of the policy and administration literature on the role of big data and AI in the public sector, and to suggest a future research agenda
21 Sun and Medaglia (2019): To map the challenges of adopting AI in the public sector as perceived by key stakeholders and to provide guidelines for AI adoption
22 Toll et al. (2019): To analyze how AI is portrayed in Swedish policy documents and what values are attributed to the use of AI
23 Valle-Cruz, Alejandro Ruvalcaba-Gomez, Sandoval-Almazan, and Ignacio Criado (2019): To study the implications of AI in the public sector
24 Wirtz and Müller (2019): To discuss the use of AI in public management structures in relation to its risks and side effects, and to develop an integrated framework of AI for public management
25 Wirtz et al. (2019): To establish a common definition of AI and provide an integrated overview of applications and challenges of AI in the public sector
26 Wirtz et al. (2020): To develop an integrated AI governance framework that compiles key aspects of AI governance and provides a guide for the regulatory process of AI and its application
lack of theory development in the studied papers. First, existing theories might not be sufficiently applicable to study public governance in relation to AIfor example, because they are too generic to cover the topic. Second, we are just in the AI 'spring' (Natale & Ballatore, 2020), meaning that the expectations of AI are high and all relevant stakeholders are aboard, yet an 'AI summer' in which AI technologies are widely used and meeting the expectations is not yet a reality (Toll et al., 2019). A third explanation for the lack of theoretical underpinning in research concerning the implications of the use of AI for public governance may be that this is an area that just has not yet received much attention by the scholarly community, especially since this contemporary research area is still relatively practical and focused on applications. While theory development in AI research in general has received considerable attention, theory development or extension concerning the implications of the use of AI for public governance, is still in a starting phase.

Quality analysis
This section discusses our quality analysis. For sixteen of the 26 studies, the research design was appropriate, and we did not have any quality concerns. For ten studies, we had minor concerns, for example when details about the literature review approach were missing, such as the number of search results in each database searched or the quality assessment mechanisms applied to the examined studies. Studies for which we had significant quality concerns during the full study assessment had already been removed from our shortlist (see Section 3.3).

Content analysis
This section presents our content analysis, including the potential benefits of AI use in government (4.4.1), the challenges (4.4.2), and an analysis of the public-governance-related scope addressed in our shortlisted articles (4.4.3). We only list the potential benefits and challenges that are mentioned as the results of the examined studies and that concern the argumentation of the authors themselves; we exclude those that are cited from other sources to avoid duplication and repetition.

Potential benefits of the use of AI for public governance
In this section, we discuss the potential benefits of using AI for public governance, as identified from the articles in our sample. We identified benefits in nine categories: 1) efficiency and performance benefits, 2) risk identification and monitoring benefits, 3) economic benefits, 4) data and information processing benefits, 5) service benefits, 6) benefits for society at large, 7) decision-making benefits, 8) engagement and interaction benefits, and 9) sustainability benefits (see Table 4).
First, efficiency and performance benefits refer to enhancing the efficiency of government operations (Ojo et al., 2019) and e-government services and systems (Al-Mushayt, 2019). For example, efficiency is improved by automating processes (Toll et al., 2019) and tasks (Ojo et al., 2019) or by simplifying processes using Machine Learning (Alexopoulos et al., 2019). Using AI in government also offers resource-constrained organizations the opportunity to be relieved of mundane and repetitive tasks (Kuziemski & Misuraca, 2020).
Second, risk identification and monitoring concerns making risk identification more effective using AI (Ojo et al., 2019). For instance, governments can use AI to increase monitoring of urban areas (Ben Rjab & Mellouli, 2019), to improve fraud detection (Bullock, 2019) and law enforcement (Gomes de Sousa et al., 2019), and to obtain more insight into complex and pressing problems and enhance the 'smartness' of cities (Ben Rjab & Mellouli, 2019).
Third, AI for public governance potentially leads to economic benefits, such as making e-government services and systems more economical (Al-Mushayt, 2019), reducing costs through workforce substitution (Wirtz & Müller, 2019), and enhancing industrial automation where robots perform complex tasks (Ben Rjab & Mellouli, 2018).
Fourth, data and information processing benefits relate to processing large amounts of data in a limited time. Big data can be processed without human intervention (Alexopoulos et al., 2019) and used to establish intelligent networks that model, analyze, and predict data in real time (Ben Rjab & Mellouli, 2018).
Fifth, service benefits can be attained by improving the quality of public services (Ojo et al., 2019;Toll et al., 2019) as well as service time (Ojo et al., 2019) and productivity (Kuziemski & Misuraca, 2020). Service delivery could also potentially become more effective (Gupta, 2019), more targeted (Bullock, 2019), more accessible, and more personal (Toll et al., 2019) using AI in government. Additionally, AI could enable more proactive public service delivery models (Kuziemski & Misuraca, 2020).
Sixth, AI use in government potentially leads to benefits for society at large and generates public value (Valle-Cruz et al., 2019), for example by improving governments' ability to serve the population (Montoya & Rivas, 2019) and by improving people's quality of life (Valle-Cruz et al., 2019). Using AI in government, public administrations can address problems such as the shortage of resources, the scale of operations, and the standardization of government delivery systems (Dwivedi et al., 2019).
Seventh, the benefits of AI use in government concern decision-making benefits. Machine Learning could support government decision-makers (Alexopoulos et al., 2019) and lead to better and more accurate decision-making (Ben Rjab & Mellouli, 2019). Using AI in government, potential areas for action can be highlighted for decision-makers (Gomes de Sousa et al., 2019). In general, AI is expected to reduce administrative burden (Fatima et al., 2020), and Big Data Algorithmic Systems can enable automatic decision-making within public institutions (Janssen et al., 2020).
Eighth, engagement and interaction benefits refer to the interaction between governments and citizens. AI use in government could pave the way for better government-citizen interaction and communication, for example in cities (Ben Rjab & Mellouli, 2018), where AI can enable virtual assistants. AI applications can also foster citizen trust (Dwivedi et al., 2019) and may enhance citizens' and businesses' satisfaction and trust in the quality of governance and public service (Kuziemski & Misuraca, 2020).
Ninth and finally, sustainability benefits may be realized using AI in the public sector (Toll et al., 2019), where AI can assist specifically by improving cities' treatment of natural resources (Ben Rjab & Mellouli, 2018). The use of AI in government could potentially advance a sustainable environment and natural resource management, for example, by transforming the energy sector (Fatima et al., 2020).

Potential challenges of the use of AI for public governance
In addition to the potential benefits, we also searched for challenges of AI use in government. Table 5 provides a comprehensive overview of the main challenges and presents them in eight categories: 1) data challenges, 2) organizational and managerial challenges, 3) skills challenges, 4) interpretation challenges, 5) ethical and legitimacy challenges, 6) political, legal, and policy challenges, 7) social and societal challenges, and 8) economic challenges.
First, the data challenges category refers to challenges related to the availability and acquisition of data (Alexopoulos et al., 2019; Gupta, 2019), the integration of data (Gupta, 2019), the quality of data (Toll et al., 2019), and the lack of structure and homogeneity (Alexopoulos et al., 2019). Low data quality and unclear dependencies between data and algorithms may lead to biased or skewed outcomes of AI algorithms (Janssen et al., 2020).
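The mechanism behind biased or skewed outcomes can be shown with a toy sketch. The districts, numbers, and sampling scheme below are invented for illustration and are not drawn from the cited studies; the point is only that an aggregate computed from data that under-covers one group drifts toward the over-represented group.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical service-demand levels in two city districts.
district_a = rng.normal(10, 1, 1000)   # true mean demand around 10
district_b = rng.normal(20, 1, 1000)   # true mean demand around 20

# Representative data: both districts equally covered.
balanced = np.concatenate([district_a, district_b])

# Low-quality data: district B is badly under-covered
# (e.g., fewer sensors or fewer digitized records there).
skewed = np.concatenate([district_a, district_b[:50]])

print(balanced.mean())  # close to the true city-wide mean of 15
print(skewed.mean())    # drifts toward district A's mean of 10
```

The same mechanism affects trained models: a system fit on the skewed sample would systematically underestimate demand in the under-covered district.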
Second, organizational and managerial challenges include organizational resistance to data sharing (Gupta, 2019;Sun & Medaglia, 2019). Public managers may also have a negative attitude towards risk in general (Pencheva et al., 2020) and the use of AI in particular (Ojo et al., 2019). It has been found that governments cannot keep up with the rapid development of AI and that the public sector lacks adequate AI governance (Wirtz et al., 2020). Moreover, the use of AI in the public sector challenges the traditionally bureaucratic form of government (Bullock, 2019).
Third, challenges of AI use in government can be related to skills, such as employees' lack of knowledge about AI and machine learning (Ojo et al., 2019) and limited in-house AI talent (Gupta, 2019). The lack of experts (Al-Mushayt, 2019) and gaps in education for highly technical skills (Montoya & Rivas, 2019) are also mentioned, as is a general need for, and lack of, specialists and experts with relevant skills.
Fourth, concerning interpretation challenges, the interpretation of AI results can be complex (Al-Mushayt, 2019) and may, in certain situations, lead to information overload (Alexopoulos et al., 2019). When relying on AI and AI algorithms, policymakers may make incorrect decisions (Janssen et al., 2020). The interpretation of outcomes from AI systems becomes even more challenging when these systems are opaque, which is typically the case (Janssen et al., 2020), because opaqueness makes it difficult for civil servants to understand the system and to communicate it to citizens (Kuziemski & Misuraca, 2020).
Fifth, ethical and legitimacy challenges concern challenges related to moral dilemmas, unethical use of data (Fatima et al., 2020; Gupta, 2019), AI discrimination (Gomes de Sousa et al., 2019), and unethical use of shared data (the latter in the context of AI in healthcare) (Sun & Medaglia, 2019). Other important themes in this category concern privacy issues (Alexopoulos et al., 2019). Many of these ethical challenges relate to removing the human element in essential decisions (Kuziemski & Misuraca, 2020).
Table 5. Main challenges of the use of AI for public governance, as identified in the reviewed studies (excerpt).

1) Data challenges
- Low data quality and unclear dependencies between data and algorithms may lead to biased or skewed outcomes of AI algorithms (Janssen et al., 2020)
- Quality and quantity of data (in the context of Machine Learning) (Alexopoulos et al., 2019)
- Dependence on data sources external to the organization, which may lead to bias and manipulation (Janssen et al., 2020)
- Sensitive data can be misused or abused (Janssen et al., 2020)
- Dealing with the risk of data misuse and manipulation

4) Interpretation challenges
- Threat of not understanding how AI will work, or how it will make decisions by itself, without the help of the human being, especially when human intelligence is overcome (Valle-Cruz et al., 2019)
- AI system opaqueness makes it difficult for civil servants to understand the system and to communicate it to citizens (Kuziemski & Misuraca, 2020)
- Difficulty understanding the way Big Data Algorithmic Systems work (Janssen et al., 2020)
- Decisions made using Big Data Algorithmic Systems (BDAS), and the AI algorithms embedded in them, may be incorrect (Janssen et al., 2020)
- Given the diversity of needs and the increasing digital divide, the complexity of analysis increases (Valle-Cruz et al., 2019)
- "Algorithmic bias" of AI when making important decisions for social development (Valle-Cruz et al., 2019)

5) Ethical and legitimacy challenges
- Consequences for the population resulting from AI-based decision-making
- Differences between machine and human value judgment
- No longer including the human element in important decisions (Kuziemski & Misuraca, 2020)
- Moral dilemmas
- AI discrimination, including inequality and unfairness caused by AI applications
- Lack of fairness (Kuziemski & Misuraca, 2020)
- Decisions taken using incorrect and unfair data (Kuziemski & Misuraca, 2020)
- Ethical questions related to avoiding discrimination in judicial decisions
- Preserving humans' privacy; challenges in preserving the privacy of data in AI systems for governments (Fatima et al., 2020)
- Maintaining privacy policies and protection mechanisms in place (Ojo et al., 2019)
- Privacy violations (Kuziemski & Misuraca, 2020)
- Cyber-security and violation of privacy (Wirtz & Müller, 2019)
- Security challenges (Toll et al., 2019)

6) Political, legal, and policy challenges
- Concerns about a lack of accountability (in the context of AI use for chatbots) (Aoki, 2020)
- Difficulty determining who is responsible for incorrect decisions taken using Big Data Algorithmic Systems (Janssen et al., 2020)
- Defining and sharing responsibilities between data providers, algorithm providers, and Big Data Algorithmic System operators as part of data governance (Janssen et al., 2020)
- Using Big Data Algorithmic Systems, public officers may become mediators instead of decision-makers ("hidden bureaucrats") (Janssen et al., 2020)
- Challenges related to regulating autonomous systems (Fatima et al., 2020)
- Legal black box: proprietary characteristics of statistical models or source code are legally protected by trade secret statutes (Liu et al., 2019)
- Technical black box: the technical nature of AI techniques is characterized by an inherent lack of transparency (Liu et al., 2019)
- Difficulties determining the ownership of data (Janssen et al., 2020)
- Costly human resources still legally required to account for AI-based decisions (in the context of AI in healthcare)

Sixth, regarding political, legal, and policy challenges, AI can be used in such a way that it undermines the fundamental values of due process, equal protection, and transparency (Liu et al., 2019). Since AI systems can consist of unintelligible black-box processes (Bullock, 2019), it is not always clear who is responsible for decisions made through the use of AI (Dwivedi et al., 2019), who is accountable, and who has control (Bullock, 2019).
Seventh, social and societal challenges include the effects of AI on the labor market, mostly when the human workforce is being replaced (Gupta, 2019; Valle-Cruz et al., 2019), and society's unrealistic expectations concerning AI use in government (Sun & Medaglia, 2019). AI use in government may also lead to the dehumanization of daily activities (Valle-Cruz et al., 2019), especially when robots replace human beings (Ben Rjab & Mellouli, 2018), and it may lead to more income inequality between upper- and lower-class citizens (Montoya & Rivas, 2019). The realization of these challenges can lead to decreased social acceptance of AI.
Eighth and finally, economic challenges of AI use in government refer to potential harm to the economy as a result of efficiency increases (Toll et al., 2019), the replacement of humans by robots (Ben Rjab & Mellouli, 2019), and the technology infrastructure investments needed to enable data storage and collection. Although new jobs may emerge, AI use in government may also lead to a loss of employment (Toll et al., 2019).

Scope of research on the implications of AI use in government for public governance
This section describes the scope used in the selected articles, including the administrative level, the type of AI, and the types of public governance implications. First, we studied the administrative level at which the selected studies address public governance implications resulting from AI use in government. Our analysis shows that most studies addressed this topic at a global level, while some studies are scoped towards the national government level (e.g., Fatima et al., 2020; Gupta, 2019; Liu et al., 2019) or towards the local government level (e.g., Aoki, 2020; Ben Rjab & Mellouli, 2018; Ben Rjab & Mellouli, 2019).
Second, we examined what type of AI the selected articles focus on. We found that most of the shortlisted studies apply a broad and inclusive use of the term 'AI'. Nine studies focus on AI in general, without mentioning the specific types of AI for which they study public governance implications. Some studies focus on public governance implications resulting from a particular type of AI (n = 7), such as Machine Learning (Alexopoulos et al., 2019) or Deep Learning (Al-Mushayt, 2019). Most studies combine the public governance perspective of AI in general with a public governance perspective of a specific type of AI (n = 10), such as public governance implications resulting from a combination of AI in general with a focus on Machine Learning and Natural Language Processing (Pencheva et al., 2020) or a focus on various AI techniques such as virtual reality, expert systems, intelligent agents, artificial neural networks, fuzzy logic, robotics, data mining, text mining, and sentiment analysis (Valle-Cruz et al., 2019). We acknowledge that these AI technologies are only a subset of the many AI technologies that exist.
Third, we discuss the types of public governance implications that we identified from the studies in our sample. Most of the shortlisted articles did not refer to public governance specifically. However, these articles did refer to a type of public governance that suits the broad and inclusive definition of public governance that we use in this study. It encompasses all the rules and actions related to public policy and service. From the reviewed articles, we identified seven forms of public governance discussed in relation to AI use in government: 1) collaborative governance, 2) organizational governance, 3) service governance, 4) participative governance, 5) governance through policy, strategy, processes, and measures, 6) governance through legislation and regulation, and 7) ethical governance (see Table 6).

Government Information Quarterly special issue concerning the implications of government use of AI for public governance
This section describes the special issue that this manuscript introduces and its relation to the dg.o2019 conference (Section 5.1). Moreover, it provides an overview of and discusses the contributions of the articles included in this special issue (Section 5.2).

Relation to the special issue and the dg.o2019 conference
Our systematic literature review lays the foundation for the special issue that this article introduces, which highlights innovative research and practical cases from the 20th Annual International Conference on Digital Government Research (dg.o2019). The dg.o2019 conference was centered around the theme of (Public) "Governance in the Age of Artificial Intelligence." The authors of thirteen selected high-potential dg.o2019 papers were encouraged to provide a substantial expansion of their conference papers and submit the resulting manuscripts to the special issue. The manuscripts needed to have substantially updated content with regard to data, research, and argumentation compared to their dg.o conference papers. After two rounds with a minimum of three blind peer reviews per round, three articles were eventually selected for this special issue. Collectively, the three articles contribute to the theme of the implications of government use of AI for public governance. Specifically, they focus on the design of an AI-based government service to improve user experiences, the enhancement of AI to match the dynamism of public policy cycles, and the utilization of AI in government to automate the identification and classification of open government data portals.

Table 6. Forms of public governance (in relation to AI), as identified in the reviewed studies (excerpt).

Form of public governance impacted by AI use in government, with an example:
- Collaborative governance: collaborations between universities and the public and private sectors to deal with AI challenges (Mikhaylov et al., 2018)

Contributions of the articles included in this special issue
The three articles included in this special issue of Government Information Quarterly extend and complement the systematic literature review findings. We analyzed the three articles in the same way as the articles identified through the literature review, using the approach outlined in Section 2. Our analysis shows that, collectively, the three articles provide a diversity of objectives, approaches, data, and settings that further advances our understanding of the implications of the use of AI for public governance. In addition to describing the research objective and method of each article, we align our analysis with the critical components of the content analysis in Section 4, including the type of public governance as well as the potential benefits and challenges of AI use in government.
First, "AI-based self-service technology in public service delivery: User experience and influencing factors" is the title of the article authored by Chen, Guo, and Gao (this issue). The article's main objective is to study the factors affecting user experience with a government service provided through an AI-based self-service technology. The primary theoretical lens is consumer value theory. Data were collected via a survey of citizens who have used the AI-enabled administrative approval service in the Wuhou district of Chengdu, China. A statistical analysis of 379 completed surveys suggests positive roles for personalization and aesthetics, as well as for trust in government, in shaping user experiences.
The study by Chen, Guo, and Gao contributes to the design of AI-based government services that improve user experiences. It offers an example of governance via AI applications and services, a form of public governance concerning AI mentioned in Section 4. More specifically, it articulates the moderating effect of trust in government on user experiences. The study primarily demonstrates the service benefits that AI brings to user experiences via personalization; in addition, AI increases efficiency, which could positively impact user experiences. Simultaneously, the study underscores trust in government as an important public governance challenge for AI-based public services. It primarily illustrates an organizational challenge (successfully implementing an AI-based technology) and secondarily an economic one (increasing efficiency and satisfaction). For practical application, this study provides design recommendations for AI-enabled government services: strengthening personalization, aesthetics, and trust in government.
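The moderating effect reported above can be illustrated with a minimal ordinary-least-squares sketch. All variable names, coefficients, and data here are simulated and hypothetical, not taken from Chen, Guo, and Gao's survey; the sketch only shows the standard interaction-term approach to testing whether trust moderates the effect of personalization on user experience.

```python
import numpy as np

def fit_moderation_model(personalization, trust, experience):
    """Ordinary least squares for a simple moderation model:
    experience = b0 + b1*personalization + b2*trust
                 + b3*(personalization * trust) + error.
    A non-zero b3 indicates that trust moderates the effect of
    personalization on user experience."""
    X = np.column_stack([
        np.ones_like(personalization),   # intercept
        personalization,
        trust,
        personalization * trust,         # interaction (moderation) term
    ])
    coefs, *_ = np.linalg.lstsq(X, experience, rcond=None)
    return coefs  # [b0, b1, b2, b3]

# Simulated survey-style data (hypothetical; n = 379 as in the study).
rng = np.random.default_rng(42)
n = 379
personalization = rng.normal(0, 1, n)   # mean-centered survey scores
trust = rng.normal(0, 1, n)
experience = (0.5 + 0.4 * personalization + 0.3 * trust
              + 0.25 * personalization * trust + rng.normal(0, 0.5, n))

b0, b1, b2, b3 = fit_moderation_model(personalization, trust, experience)
print(b3)  # interaction estimate, close to the simulated 0.25
```

In practice, such models are estimated with standard errors and fit statistics (e.g., via a statistics package); the interaction term itself is the part that captures moderation.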
Second, the title of the study by Valle-Cruz, Ruvalcaba-Gomez, Sandoval-Almazan, and Criado (this issue) is "Assessing the public policy-cycle framework in the age of artificial intelligence: from agenda-setting to policy evaluation." The main research question is how AI impacts the public policy cycle. A substantial systematic literature review of artificial intelligence in public policy and administration research provides the background for studying AI's impact on the public policy cycle. The primary analytical approach for answering the research question is an illustrative case analysis showing how AI can impact the public policy cycle, including four stages (agenda setting, policy formulation, policy implementation, and policy evaluation). A wide range of public governance settings and policy areas is included in the literature review and cases. This article outlines a dynamic public policy cycle in which AI enhances each stage of the cycle and the cycle as an integrated dynamic.
The article by Valle-Cruz, Ruvalcaba-Gomez, Sandoval-Almazan, and Criado extends and complements governance through policy and strategy as a form of public governance in relation to AI. The extension lies in the enhancement AI brings to the dynamism of the public policy cycle. These enhancements also constitute the potential benefits of AI for decision-making and society at large. For instance, at the agenda-setting stage, AI can assist in the prevention of policy problems. AI could help analyze a large amount of data from various sources to generate policy options for informed policy formulation. Intelligent automation can provide efficient public service at the policy implementation stage. AI can aid in the prediction and visualization of policy outcomes for facilitating timely and comprehensive policy evaluation. Simultaneously, the article acknowledges the importance of recognizing and managing the potential challenges of AI, namely algorithm-based discrimination, lack of transparency, the digital divide, and the potential of using AI for social control. These challenges touch on the ethical, political, and societal challenges mentioned in Section 4. Third, the article by Correa and Da Silva (this issue) is entitled "A deep-search method to survey data portals in the whole web: towards an AI machine learning classification model." This study's main objective is to develop a machine-learning method to automatically identify and catalog data portals by going through the source code of all published web pages (approximately 2.5 billion). The research effort involves developing and implementing computational techniques and machine-learning algorithms to identify and classify open data portals. This deep search's data and settings include 1650 open government data portals covering many languages and countries as represented by published web pages. It focuses on the status of research in the field as established in Section 4.
In addition, it explores the implications of AI for public governance.
Correa and Da Silva's article contributes to the utilization of AI to automate the identification and classification of open government data portals as well as the creation of a comprehensive repository. This AI-enabled open data effort addresses governance of data and infrastructure as an identified form of public governance in Section 4. This study shows the potential benefit of data and information processing capability (as stated in Section 4) through the efficient creation of the discussed repository and raw data provision. For data and information processing, the repository of data portals provides a critical data infrastructure for identifying data resources and implementing topic-specific research. The sharing of technical notes on deep search by this study also provides a template for creating such a repository for various public governance and policy topics such as public health, transportation, and finance. However, this type of deep search method involves data challenges, as stated in Section 4, particularly the risk of a certain degree of misclassification.
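As a rough illustration of the kind of source-code scanning involved, the sketch below flags candidate open data portals by searching a page's source for platform markers. The marker list, URLs, and threshold are invented for illustration; Correa and Da Silva's actual method relies on trained machine-learning classifiers over far richer features, not this simple rule.

```python
import re

# Hypothetical signal terms; real portals often expose platform traces
# such as CKAN markup or dataset listing pages.
PORTAL_MARKERS = [
    r"ckan",              # widely used open data portal platform
    r"open\s+data",
    r"dataset",
    r"data\s+catalog(ue)?",
    r"api/3/action",      # CKAN action API path
]

def portal_score(page_source: str) -> int:
    """Count how many portal markers appear in a page's source code."""
    text = page_source.lower()
    return sum(1 for pattern in PORTAL_MARKERS if re.search(pattern, text))

def looks_like_data_portal(page_source: str, threshold: int = 2) -> bool:
    """Flag a page as a candidate open data portal when enough markers
    are present; candidates could then be passed to a trained classifier
    for confirmation."""
    return portal_score(page_source) >= threshold

pages = {
    "https://data.example.gov": "<title>Open Data Portal</title> "
                                "powered by CKAN <a href='/dataset/budget'>",
    "https://news.example.com": "<title>Daily News</title> sports weather",
}
for url, src in pages.items():
    print(url, looks_like_data_portal(src))
```

At web scale, such cheap pre-filters matter because running a full classifier over billions of pages is infeasible; the misclassification risk noted above is exactly the trade-off this two-stage design tries to manage.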
The three articles included in this special issue contribute to our understanding of the interconnectedness of AI use in government on the one hand and public governance on the other. Collectively, these articles offer opportunities to advance our knowledge about the use of AI in government and its implications for public governance. First, Correa and Da Silva's deep-search methods demonstrate the use of AI to automate the creation of data repositories as public governance resources, showing the potential benefit of data and information processing for public governance in relation to AI. Next, AI can offer opportunities to streamline and, potentially, transform our approach to developing, implementing, and evaluating public policy; the dynamic public policy cycle presented by Valle-Cruz, Ruvalcaba-Gomez, Sandoval-Almazan, and Criado is a case in point. It contributes to governance through policy and strategy, while also extending the potential benefits to public policy decision-making and to society, improving quality of life via better public policy. Lastly, Chen, Guo, and Gao's article contributes to governance through AI applications and services. It articulates the service benefit of AI while recognizing the organizational challenges of producing and delivering such AI-enabled services. Future research can explore the interplay between AI features, the type of public service, and trust in the public organizations administering AI-enabled services.

A research agenda on the implications of the use of AI for public governance
Various research agendas centered around AI use in the public sector have already been developed (e.g., Dwivedi et al., 2019; Gomes de Sousa et al., 2019; Kankanhalli et al., 2019). Some of them take the perspective of using AI technology in the public sector. For example, Kankanhalli et al. (2019) discuss research areas and challenges for the combination of the Internet of Things and AI to build smart governments, and Gomes de Sousa et al. (2019) present AI solutions for the public sector. The research agenda by Dwivedi et al. (2019) is not only focused on the public sector; it discusses AI implementation in a broader context, i.e., within business and management, government, the public sector, and science and technology. Our research agenda complements these existing agendas by focusing specifically on the implications of the use of AI for public governance. It has been developed based on our systematic literature review and our analysis of the articles included in this special issue, and it comprises eight process-related recommendations (Section 6.1) and seven content-related recommendations (Section 6.2) for researchers who examine the implications of AI use for public governance.

Process-related research recommendations
The eight process-related recommendations are as follows: 1) Avoid applying AI-related terms superficially in public governance sources. Researchers are advised to avoid cosmetically dropping AI-related terms into the titles of articles. One finding from our systematic review of articles with AI-related terms in their titles, keywords, and abstracts (compared to the content of the articles) is that AI terms, given their attractiveness to readers, are used superficially as buzzwords in articles examining entirely different topics. This was a clear trend among the numerous studies excluded at different phases of our literature review; indeed, it was the main reason why our search criteria generated a large number of articles that seemed, judging by their titles, keywords, or abstracts, to deal with AI and public governance, but in reality tackled other subjects entirely and were eventually excluded. While this may be a way to expand readership, this dishonest practice obscures the scarcity of research in the area. 2) Move beyond a generic focus on AI in public governance sources.
According to our review, AI was addressed generically in many studies. Although there were a few exceptions (e.g., Kuziemski & Misuraca, 2020), many studies were generic with respect to the type of AI studied (AI in general), the domain (not specified), the spatial and temporal dimensions of the study (e.g., no specific country), the level of government studied (not specified), or the focus of the study. We identified a need for more domain-specific studies, specific to certain areas or countries and at specific government levels. This would enable meta-analysis and comparison between the findings of studies in different domains, countries, areas, and periods, among other aspects. 3) Move towards methodological diversity instead of dominant qualitative methods. As in the early days of digital government, an overwhelming majority of today's research on AI use in government and its public governance implications applies qualitative methods. Given the data-heavy nature and social embeddedness of AI applications, especially in areas of citizen-government interaction, there are clear opportunities for quantitative, data-driven, and computational research methods. This will open the door to even richer mixed research methods that may capture the comprehensive and multifaceted implications of AI for government and society. Difficulties in data access in government contexts are acknowledged, as privacy and safety concerns are real in AI-dominant government implementations (e.g., facial recognition, tracking and surveillance practices, autonomous agents, citizen-centric applications). These barriers call for innovation in data access, collection, management, and anonymization methods. 4) Expand conceptual and practice-driven research from the private to the public sector.
Existing research on the implications of AI use for public governance relies mainly on studying (or borrowing) practices and implementations from private-sector contexts and applying them (sometimes with limited oversight) to public-sector contexts. Few public-sector-specific conceptual frameworks are being developed. This is another opportunity for practice-oriented research in public governance areas. 5) Increase empirical research on the implications of AI use for public governance. Although considerable attention has been paid to AI technologies and to speculation about the societal impacts of AI, contributing to empirical testing is not common in AI research today (Aoki, 2020), as confirmed by our research. The slow pace of empirical research on the public governance implications of AI use in government, contrasted with the expedited drive in practical implementations, may lead to increased biases in government decisions and responses to societal challenges, rising levels of inequality, or interventions that are neither fair nor responsive to public needs, with potentially problematic ethical implications for societies and governance. Thus, future research should extend beyond the conceptual and speculative levels and contribute to empirically testing the implications of AI use in government for public governance. 6) Go beyond exploratory research and expand explanatory research.
As is typical of an early research area, existing studies on the public governance implications of AI use tend to be largely exploratory. For example, most of the studies reviewed here are either literature reviews or rely on case studies as a research method. As AI implementations start to bear fruit (or cause harm) in the government ecosystem, there is an urgent need to pursue explanatory research designs that adopt expanded empirical methods to generate operational definitions, extract meanings, and explain outcomes specifically within public governance contexts. Furthermore, given the widely discussed risks and threats in practice-oriented literature about AI implementations within public policy and public governance domains, and given that digital transformation problems lie at their core, AI implementations in government have real potential to generate 'wicked problems' (Fountain, 2019), creating chronic and complex problems of management or governance with prolonged and wide-scale socioeconomic implications. Explanatory research designs are well positioned to address these challenges. 7) Openly share the research data used for studies on the implications of the use of AI for public governance. To boost research on the public governance implications of AI, opening up underlying research data should become standard practice, a practice that was barely existent in the articles we systematically reviewed. More generally, data reuse can lead to more findings from the same dataset (Joo, Kim, & Kim, 2017), to asking new questions (Wallis, Rolando, & Borgman, 2013), to testing different hypotheses (Kim & Adler, 2015), and to increasing the knowledge in the field (Joo et al., 2017). Both scholarly societies and funding organizations active in the domains of public governance are advised to incentivize and trigger more research focusing on openness, rigor, and transparency in the diverse areas of AI and public governance.
8) Learn from applicable pathways followed by digital government (or e-government) scholarship in its early phases. Scholarship on AI and public governance appears to be following a pathway similar to that of digital government (or e-government) scholarship in its early phases. In the early years of digital government research, when it was still a relatively new field of study, a lack of theoretical frameworks and rigor was a common trait (Grönlund, 2010; Heeks & Bailur, 2007; Yildiz, 2007). This trajectory changed substantially during the past decade, as digital government research evolved in terms of theoretical grounding, methods, rigor, and scope (Bannister & Connolly, 2015; Rodríguez Bolívar, Alcaide Muñoz, & López Hernández, 2010). While investigating the implications of the use of AI for public governance, reference theories from other disciplines may be used to enable the development of the field, as also happened in the case of 'e-government' research.

Content-related research recommendations
Content-wise, the seven main areas in which research on the public governance implications of AI use in government is recommended by the studies included in our literature review are: 1) Develop solid, multidisciplinary theoretical foundations for the use of AI for public governance. This need is stressed by, for instance, a 2019 study stating there is a "strong need to relook at theory and relationships based on the emergence of AI" (p. 15). Researchers in the areas of digital government, data governance, digital transformation, and information systems may want to build on the collective theoretical foundations developed in their respective fields over the past decades. As public governance remains a multidisciplinary field of research, and as the implications of AI in government extend to almost all socio-economic fields, it is strongly advised to expand multidisciplinary collaboration that feeds into theoretical rigor in the area. This is also recommended by Aoki (2020), who emphasizes that AI research should be conducted from an interdisciplinary perspective. 2) Investigate effective implementation plans and metrics for government strategies on AI use in the public sector. To quote Kuziemski and Misuraca (2020): "the role of government as 'user' of AI technologies has received far less attention than the 'regulator' role in the strategies adopted so far" (p. 3). This is also visible in countries' strategic AI plans, which are typically sparse in implementation details. Hence, it is challenging to assign responsibilities and address accountability issues in the use of AI in the public sector (Fatima et al., 2020). Realistic and tangible metrics for measuring such projects' progress and success are also usually lacking (idem). This confirms the findings of Wirtz et al. (2020), who, while looking for frameworks focused on the governance or regulation of AI risks and challenges, were only able to identify two such frameworks, namely those of Gasser and Almeida (2017) and Rahwan (2018).
They state that these models fail to address how to design and implement AI governance or how to address government responsibilities in governance implementation. There is a need for research and common frameworks on the potential impact of the use of AI in the public sector (Kuziemski & Misuraca, 2020), including research into the perceived trade-offs between various values related to AI use in government, such as transparency and system performance (Dwivedi et al., 2019). We recommend scientific research to help address these gaps and to advise policymakers responsible for AI strategies on 1) how the implementation of AI in the public sector can be realized, 2) what useful targets for such strategies may be, and 3) what trade-offs to consider. For example, Dwivedi et al. (2019) state that traditional long-term strategies do not work for rapidly changing technologies, including AI. They recommend developing flexible short- to medium-term AI strategic plans that can adjust to changes and breakthroughs in the technology. 3) Investigate best practices in managing the risks of AI use in the public sector. The selected articles revealed many risks of the use of AI for public governance (see Section 4.4.2). These include the risk of data misuse and manipulation (Gomes de Sousa et al., 2019) and the usage of automated risk assessment tools in ways that counteract the fundamental values of due process, equal protection, and transparency (Liu et al., 2019). In the context of cross-sectoral collaboration around AI, Mikhaylov et al. (2018) refer to the various approaches to managing risk in the public and private sectors. Risks may be dealt with in many different ways, depending on the situation. Our literature review revealed a lack of research into best practices in managing the risks of AI use in the public sector.
If public governance scholarship on AI use in government does not keep pace with the expedited practical development of AI implementation in governments worldwide, the resulting lack of evidence-based, contextual, and localized research may lead to significant failures, since failures due to the use of AI in government can have substantial negative implications for governments and society.
In research efforts related to risk management for AI use in the public sector, scholarship should not neglect critical ethical issues related to the public governance implications of AI use, including fairness, explainability, transparency, accountability, bias, privacy, safety, security, and societal impact. 4) Examine how governments can better engage with stakeholders and communicate their AI strategic implementation plans to them. Fatima et al. (2020) state that governments should be more proactive and engage with stakeholders to examine their data needs, taking into account privacy and security issues. More specifically, researchers should contribute to these engagement strategies and advise policymakers in creating plans for communicating the implications of AI use in government to citizens, companies, and other relevant stakeholders. 5) Investigate a large diversity of possible governance modes for AI use in the public sector. The AI policy debate focuses mainly on a limited selection of governance modes, such as voluntary standards and self-governance (Kuziemski & Misuraca, 2020). The literature on the public governance implications of AI use in government largely neglects other forms of governance, such as those involving power-related considerations (idem); addressing these should be a goal for future research on the implications of the use of AI for public governance. 6) Research how the performance and impact of the public sector's AI solutions can be measured. Several articles address topics related to the performance and impact of AI use in the public sector (e.g., Ben Rjab & Mellouli, 2019; Chen et al., 2019; Dwivedi et al., 2019). More specifically, Dwivedi et al. (2019) write that the impact of AI on the social and economic organization of individuals and society is not yet clear. Ben Rjab and Mellouli (2019) also state that AI's impact on society needs to be assessed.
While this impact is still being measured, decision-makers need to be aware that it can be both positive and negative (Dwivedi et al., 2019), and may affect not only individuals but also public sector organizations themselves (Bullock, 2019; Chen et al., 2019). Finally, Dwivedi et al. (2019) call for research on how AI's impact on decision-making performance can be measured, referring to a lack of standards for AI performance assessment. 7) Examine the impact of scaling up AI usage in the public sector. Our study showed that AI allows for increasing the scale of government operations (Alexopoulos et al., 2019; Dwivedi et al., 2019). However, several studies also refer to scalability problems (Kankanhalli et al., 2019). In particular, Dwivedi et al. (2019, p. 29) state that "the velocity and scale of AI impact is so high that it rarely gives the public policy practitioners sufficient time to respond". Agile governance is proposed as one solution to this challenge (idem). We recommend that future research on the implications of AI use for public governance examine agile governance further, alongside alternative responses to scale-related challenges.

Conclusions
To lay the foundation for the special issue that this article introduces, we 1) present a systematic review of existing literature on the implications of the use of AI in public governance and 2) develop a research agenda. We carried out a systematic literature review identifying relevant and high-quality research from four databases, eventually resulting in the selection of 26 articles for our review. All papers in our sample were published in the past three years, showing the topicality of public governance research in the age of AI. The majority of the studies in our sample concerned qualitative research, literature reviews, and research that neither tests nor extends existing theories.
In our qualitative analysis, we identified potential benefits of AI use in government in nine categories: 1) efficiency and performance benefits, 2) risk identification and monitoring benefits, 3) economic benefits, 4) data and information processing benefits, 5) service benefits, 6) benefits for society at large, 7) decision-making benefits, 8) engagement and interaction benefits, and 9) sustainability benefits. Challenges of AI use in government were identified in eight categories: 1) data challenges, 2) organizational and managerial challenges, 3) skills challenges, 4) interpretation challenges, 5) ethical and legitimacy challenges, 6) political, legal, and policy challenges, 7) social and societal challenges, and 8) economic challenges.
Most of the examined studies apply a broad and inclusive use of the term AI. They do not refer to governance specifically, although they address types of governance that fit the comprehensive and inclusive definition of governance used in this study. This broad governance definition includes collaborative governance; organizational governance; service governance; governance through policy, strategy, processes, and measures; and ethical governance. We want to emphasize that, considering the limited number of articles in our systematic literature review, our findings should be interpreted with caution, and that the field is rapidly changing.
Regarding the contributions of the articles included in this special issue, their focus was on the utilization of AI to automate the identification and classification of open data portals, on how AI can enhance the dynamism of the public policy cycle, and on user experience with government services provided by AI-based self-service technology. These articles collectively offer opportunities to advance our knowledge about AI use in government and its public governance implications. They are reviewed and presented here as recent examples of how scholarly efforts in the field of AI and public governance are taking shape.
Based on both the literature review and the analysis of articles included in this special issue, we propose a research agenda concerning the implications of AI use for public governance. The research agenda contains both process-related and content-related recommendations. Process-wise, future research on the implications of the use of AI for public governance should move towards more public-sector-focused, methodologically diverse, empirical, multidisciplinary, and explanatory research and focus more on specific forms of AI rather than AI in general. It also recommends that researchers in the area of the implications of AI use for public governance learn from similar pathways followed by digital government (or e-government) scholarship at its early phases. Content-wise, our research agenda calls for the development of solid, multidisciplinary, theoretical foundations for the use of AI for public governance, as well as investigations of effective implementation, engagement, and communication plans for government strategies on AI use in the public sector. Furthermore, the research agenda calls for research into managing the risks of AI use in the public sector, governance modes possible for AI use in the public sector, performance and impact measurement of AI use in government, and impact evaluation of scaling-up AI usage in the public sector.
The search criteria we used in our literature review intentionally excluded technical journals in computer science and technical AI applications. This is based on the realization that articles in these journals focus on technological problems and solutions from a highly technical point of view, rather than on the implications of AI usage for government and public governance. We invite researchers in more technical fields, primarily in areas of computer science and technical applications of AI, to apply our search criteria in their fields and to explore and compare the status of research in these fields vis-à-vis public governance.
In addition, the approach we used in this study may appear to have excluded particular studies that might have been useful in addressing the topic of the implications of the use of AI for public governance. For example, our search terms excluded terms like expert systems, rule-based systems, chatbots, agent-based systems, and algorithms. However, since our study focuses on articles from the period 2010 to 2020, we argue that articles specifically studying expert systems, chatbots, algorithms, and other AI-related topics did in fact appear in our search for the term 'AI' or one of its included derivatives. Articles on chatbots, expert systems, and other AI-related systems that do not mention "artificial intelligence" would not have appeared in our search; most likely, however, such articles belong to an era when AI terms were not commonly used in articles covering public administration or governance, which falls outside the period we investigated.
Today, a large portion of the research, debate, and influence shaping AI's progress across the governance ecosystem is documented in practitioner and policy documents. Policy and practice-oriented documents were intentionally excluded from this study, as we wanted to explore primarily the scholarly sources concerning AI and governance. Since the publication of journal and conference articles may lag behind the most recent developments in the implications of the use of AI for public governance, we recommend that future research complement our search for scientific studies with a search for non-scientific literature.