Artificial Intelligence and the Political Legitimacy of Global Governance

Although the concept of “AI governance” is frequently used in the debate, it is still rather undertheorized. Often it seems to refer to the mechanisms and structures needed to avoid “bad” outcomes and achieve “good” outcomes with regard to the ethical problems artificial intelligence is thought to actualize. In this article we argue that, although this outcome-focused view captures one important aspect of “good governance,” its emphasis on effects runs the risk of overlooking important procedural aspects of good AI governance. One of the most important properties of good AI governance is political legitimacy. Starting out from the assumptions that AI governance should be seen as global in scope and that political legitimacy requires at least a democratic minimum, this article has a twofold aim: to develop a theoretical framework for theorizing the political legitimacy of global AI governance, and to demonstrate how it can be used as a compass for critically assessing the legitimacy of actual instances of global AI governance. Elaborating on a distinction between “governance by AI” and “governance of AI” in relation to different kinds of authority and different kinds of decision-making leads us to the conclusions that much of the existing global AI governance lacks important properties necessary for political legitimacy, and that political legitimacy would be negatively impacted if we handed over certain forms of decision-making to artificial intelligence systems.

"AI governance," then, could be seen as a subdomain of "AI ethics," guided by the assumption that we can collectively influence the development of AI. In the literature, "AI governance" often simply refers to the mechanisms and structures needed to avoid "bad" outcomes and achieve "good" outcomes with regard to the problems and issues already identified and formulated within AI ethics.
In this article, we argue that although this outcome-focused view captures one important aspect of what "good AI governance" requires, its emphasis on the effects of governance mechanisms runs the risk of overlooking important procedural aspects of good AI governance. One of the most important properties of good governance is political legitimacy, and we will argue that, for AI governance to be politically legitimate, it matters not only what it achieves but also how it is structured. Under the assumption that such governance must be global in scope, this article has a twofold aim: (a) to develop a theoretical framework for theorizing the political legitimacy of global AI governance and (b) to demonstrate how it can be used as a compass for critically assessing the (lack of) legitimacy of actual instances of AI governance. Rather than defending a substantive first-order theory of global political legitimacy, our ambition is to spell out and defend some normative boundary conditions that any satisfactory account of the political legitimacy of AI governance must respect.
Our basic presumption is that, whatever else global political legitimacy requires, it must at least be minimally democratic. The main aims of the article are pursued by asking what this entails, in light of a distinction between "governance by AI" and "governance of AI" and in relation to different kinds of authority and different kinds of decision-making, either employing AI decision-making or applying decision-making to AI development and deployment. Drawing on insights from political theorizing around global governance more generally, we argue that AI governance must take procedural aspects, as well as outcomes, into account. Insofar as we accept that political legitimacy must at least be minimally democratic, an account of the legitimacy of global AI governance must respect that the governance of AI and governance by AI have a specific normative relationship and raise different normative demands. This suggests, among other things, that political legitimacy would be reduced if we decided to outsource certain kinds of decision-making to AI systems, and that many of the initiatives to govern AI globally currently coming out of private, non-state actors lack political legitimacy.
The structure of the article is straightforward. In the first section, we give a brief overview of how the concept of "governance" is applied in the literature on AI governance, to illustrate the predominant outcome-focused view and the worries that it raises (section "What is 'Governance' in AI Governance?"). Thereafter, we focus on the first part of our twofold aim, developing the theoretical framework constituted by some basic normative boundary conditions shaped in light of the distinction between "governance by AI" and "governance of AI" (section "The Political Legitimacy of AI Governance: A Theoretical Framework"). The third section focuses on the second part of the twofold aim, applying this theoretical framework to current AI governance to illuminate how it can be used as a critical compass for assessing the (lack of) legitimacy of actual instances of AI governance (section "Assessing the Political Legitimacy of Global AI Governance"). The final section concludes and addresses the ways in which the proposed approach may respond to the worries raised by the outcome-focused view of AI governance (section "Winding Up").

What is "Governance" in AI Governance?
The concept of "governance" in discussions around AI is currently a term much too broad for its own good. It is used to refer to everything from the plethora of "ethics guidelines" for AI development written by state and non-state actors (Jobin et al., 2019), to the presence of human oversight in automated processes (AIHLEG, 2019: 16), and hypothetical international laws for preventing undesirable "race dynamics" among superpowers developing AI (Dafoe, 2018: 43-47). This imprecision is not surprising. First, there are a number of ways of defining AI, and the scope of what counts as AI governance hence depends on how AI is delineated to begin with. Second, both AI and its regulation are rapidly changing phenomena, and technological breakthroughs and suggestions for how to govern AI are constantly presented not only in academic journals, but also in blog posts, podcasts, and non-peer-reviewed papers posted online. Contributors to the AI governance debate are not only academics but also nongovernmental organizations (NGOs) and tech companies with a potential interest in shaping the field in accordance with their interests. Indeed, some have suggested that the vagueness of the concept of "governance" lends it flexibility, which in turn explains why it is so popular among scholars and policymakers (Peters, 2012: 19). We will argue, however, that there is a risk that the way in which AI governance is currently conceived creates a blind spot for the distinctively political nature of the relevant questions about the goals of AI development and deployment, and what it would take for the answers to be democratically legitimate.
The simplest way to begin staking out what currently constitutes "AI governance" research, at least research relating to normative issues, is to conceive of it as a subdomain of the wider field of "AI ethics." This field, in turn, could plausibly be defined by analogy to how other fields in applied ethics are delineated: just as "bioethics" is concerned with ethical issues arising in light of advances in the life sciences, AI ethics is the field engaged in answering ethical questions that arise in light of actual and conceivable AI systems. Several recent prominent handbooks and academic overviews of the ethics of AI, for instance, introduce the field as concerned with the ethical implications and use of AI applications, and define it by giving examples of issues it covers, including questions around AI-powered autonomous weapons, criminal sentencing, automated work processes, autonomous vehicles and sex robots, and what the moral status of AI agents is (Bostrom and Yudkowsky, 2014; Dubber et al., 2020; Liao, 2020; Müller, 2020). Furthermore, most would recognize that AI development is not a deterministic process, but rather a practice that can be steered toward or away from certain outcomes. Often using lofty and imprecise terms, it is commonplace to state that "society" or "humanity" can "guide" AI development toward certain goals. One major obstacle to this, it is assumed but rarely explained with analytical rigor, is how to make sure that AI systems can be made to "align" with "human values," often meaning roughly that the effects they produce should be in line with what "we" wish to see (Gabriel and Ghazavi, 2022).
We suggest that the currently most common way of understanding what "AI governance" is follows from combining the characterization of AI ethics as constituted by the list of questions and issues that AI actualizes with the assumption that we can collectively influence the development of AI. Hence, "AI governance" often refers to the mechanisms and structures needed to avoid "bad" outcomes and achieve "good" outcomes with regard to the listed issues, for example, autonomous vehicles, automated decision-making, and so forth. For instance, one influential definition states that "the field of AI governance studies how humanity can best navigate the transition to advanced AI systems, focusing on the political, economic, military, governance, and ethical dimensions" (Dafoe, 2018: 5). Similarly, two recent special issues on the topic were introduced by stating, respectively, that "Understanding and managing the risks posed by AI is crucial to realise the benefits of the technology" (Taeihagh, 2021: 143), and that "Ethical governance is needed in order to develop standards and processes that allow us to transparently and robustly assure the safety of ethical autonomous systems and hence build public trust and confidence" (Winfield et al., 2019: 510; cf. Gasser and Almeida, 2017; Cath, 2018: 2; Jelinek et al., 2021: 141). Relatedly, although many governance efforts are currently happening at the national or subnational level, the transnational nature of externalities created by AI technology suggests that at least some governance mechanisms will have to be global in scope. And although much of the AI development is done by a few companies, primarily in the US and China, they are large, transnational firms, employing people across the world, and implementing the technology in the global market. Since the issues and potential problems identified in AI ethics arise regardless of national boundaries, their solutions require governance mechanisms that are similarly global (cf. Cihon et al., 2020; Erdelyi and Goldsmith, 2018).
We believe this is a straightforward way of understanding AI governance, and that it is only natural that a young and interdisciplinary field of study has coalesced like this. Indeed, we find it difficult to think of more fruitful ways of defining the field. It is crucial, however, to recognize that this is a largely outcome-focused view, in the sense that (global) AI governance is defined as whatever helps us solve issues in AI ethics. On this view, the various international AI standards, company guidelines, codes of conduct for AI developers, policy frameworks, national laws, and international policies around AI are all forms of governance because they are potential ways of achieving "good" and avoiding "bad" outcomes with regard to the issues AI raises. This focus on outcomes tinges the way we identify what is at stake, however, directing our attention toward certain issues and away from others. Specifically, we believe it raises three sets of worries about how AI governance research is conducted, and how global AI governance itself is set up.
First, it means that research and policy proposals around AI governance become intimately tied to prior assumptions about AI ethics, and the specific issues that receive the most attention there become the most discussed governance measures. For instance, although the issue of AI safety (understood here as the problem of making sure that a potential future superintelligent artificial general intelligence does not harm or subjugate the interests of humans) historically has received much less attention and funding than the technical AI research and development that could create such an AI system, it has arguably grown to engage a significant part of the technical AI research community, as well as capturing the attention of many philosophers. In this subfield, "AI governance" is often used to refer to the questions around how to treat and control a potential superintelligent AI as a subject to be governed, or perhaps as a subject that could govern us (see section "Assessing the Political Legitimacy of Global AI Governance" below, as well as Bostrom et al., 2020; Erman and Furendal, 2022). Some have suggested that this has meant that comparatively less of the early AI ethics research effort went into the short-term implications of existing narrow AI systems, such as how to assign responsibility for biased AI systems and prevent them from reinforcing existing injustices (cf. O'Neil and Gunn, 2020). We are not claiming that either focus is better. The example simply illustrates that the research agenda for AI governance depends on how AI ethics more broadly is construed and understood.
Second, if AI governance research draws too much on the list of issues identified in AI ethics, it risks overlooking institutions that do not pursue those aims narrowly construed, as well as the vast literature on (domestic and global) "governance" in political science and other fields, which investigates and theorizes governance arrangements more broadly understood (cf. Levi-Faur, 2012). As we will discuss below, many AI applications are already covered by existing legislation and regulation, and a large number of state and non-state actors are currently in the process of staking out positions and negotiating future regulations of AI development and deployment. For instance, although the new European Union (EU) directive on AI technology is still only in the making, the existing General Data Protection Regulation (GDPR) arguably covers at least some AI applications. The outcome-focused understanding of AI governance may lead us to remain focused on imagining idealized institutional responses to our list of issues, overlooking the way that existing, actual institutions influence the ways in which AI technology is built and implemented. As more such initiatives appear, they should ideally be both subjects of and an inspiration for AI governance research.
A third worry is that the outcome-focused understanding of AI governance naturally directs our attention toward the effects of governance mechanisms (that is, whether they produce the consequences "we" have identified as desirable) rather than the goals pursued and the means by which to achieve them. We worry that there is a temptation to assess both hypothetical and actual governance mechanisms merely in relation to how well they solve the issues previously identified in AI ethics, and to overlook what those goals are and how they have been selected and pursued. Conceived this way, there is a risk that the question of AI governance is reduced to one of implementation and enforcement of an agenda that is already set by researchers and, often, the AI industry. Starting out from the basic presumption that whatever else political legitimacy requires, it must at least be minimally democratic, we argue below that the societal impact of AI should be subject to collective decision-making, which presupposes democratic control of at least part of the agenda. Unless the overall aims and societal goals of AI development and deployment are specified and justified in the right way, the global governance of AI will consequently fall short of a plausible account of democratic legitimacy.
So, while we do not believe that AI ethics and the existing AI governance literature are misguided, we have argued that the way the field is conceived risks leading us to systematically miss the important values at stake in how governance mechanisms for AI work, as distinct from the outcomes they produce. It is worth noting that, unlike some of the problems identified in AI ethics, such as algorithmic bias or lack of transparency in AI-assisted decision-making, the problem we have identified cannot be solved by better algorithms or more data, or by mechanisms that induce more ethical reflection or more diversity among those who develop AI technology (cf. O'Neil and Gunn, 2020). Our point is that the very identification of the problems and potential of AI, the ranking of these, and the process of weighing them against other values are all prior tasks that require politics, and should be subsumed under a properly conceptualized notion of political legitimacy.

The Political Legitimacy of AI Governance: A Theoretical Framework
The prevailing outcome-oriented view of AI governance stresses one important aspect of "good governance": a focus on solving specific issues in AI ethics, so as to avoid bad outcomes and achieve good ones, is crucial for improving the whole global governance structure of AI. Yet measuring good governance solely by the effects of governance mechanisms runs the risk of overlooking important procedural aspects of good AI governance. One of the most important properties of good governance from the perspective of normative political theory is political legitimacy. In this section, we develop a theoretical framework for analyzing the political legitimacy of actual and hypothetical global AI governance institutions.
Indeed, since political legitimacy is a heavily disputed notion in both the empirical and normative literature, alluding to everything from effectiveness to justice, the presumption that global governance must at least be minimally democratic to be legitimate is controversial. However, we assume for the sake of argument that it is sufficiently undisputed in the normative theoretical literature, at least among many political philosophers. As stated in the introduction, our aim is not to defend a substantive first-order theory of global political legitimacy, but rather to spell out and defend some normative boundary conditions that any satisfactory account of the political legitimacy of AI governance must respect. Specifically, we will elaborate on the distinction between "governance of AI" and "governance by AI" in relation to different kinds of authority and different kinds of decision-making, and argue that a satisfactory account of the legitimacy of global AI governance must respect that governance of AI and governance by AI have a specific normative relationship and raise different normative demands. Among other things, it will be argued that governance by AI cannot be democratic by itself, since it needs delegation from governance of AI, which in turn must be authorized through a democratic process. The notion of "minimally democratic" is supposed to be interpreted in gradual (more or less legitimate) rather than binary (threshold for legitimacy) terms. Whether a threshold should be defended is a task for a substantial theory to answer. The notion refers instead to the idea that we are limited to specifying certain normative boundary conditions for the global political legitimacy of AI governance, to which a substantial account may add further conditions. The argument is thus made on a metanormative and metatheoretical level, which entails a higher level of abstraction and a higher level of generality. Whether a substantial theory will defend a threshold notion, and specify where that threshold is located, will depend on whether the theory aims at being ideal or non-ideal in character, on what feasibility constraints are presumed, and so on; these are questions we do not take a stand on here.

Global Political Legitimacy: Conceptual and Normative Assumptions
As noted in the previous section, a large part of AI governance is inherently global in scope and is indeed a subset of global governance more generally. This suggests that we must apply a conceptual apparatus that is fitting for a global context to address the question of political legitimacy. For this purpose, the existing literature on AI governance would benefit from importing insights from the research on global governance conducted in political science and political theory. This section hence elaborates some general conceptual and normative assumptions made in the article about how political legitimacy and democracy must be understood in a global, as opposed to domestic, context.
Today, it is widely acknowledged among political scientists and normative theorists that political legitimacy is a desirable quality of global governance institutions generally, and not only of those regulating AI. Hence, even though there is little agreement on the specific regulative content of principles of political legitimacy, there is consensus on the value of political legitimacy in global governance. To theorize the political legitimacy of global governance, however, means to identify certain features of global political legitimacy that distinguish it from political legitimacy as it has traditionally been applied. In the philosophical literature, political legitimacy is typically described as a virtue of political arrangements and the rules (laws) that are made within them. It refers to the justification of coercive power, political power, or political authority, usually signifies the right to rule, and on most accounts also entails political obligations (Buchanan, 2002, 2004; Christiano, 1996; Wellman, 1996). Moreover, it is commonly assumed that principles of political legitimacy are supposed to regulate the relationship between what we may broadly call "rule-takers" and "rule-makers," that is, between the political entities (agents and institutions) that make, apply, and enforce rules and the subjects to whom these rules apply (Buchanan, 2002; Erman, 2020; Buchanan and Keohane, 2006; Valentini, 2012).
This concept of political legitimacy has been developed for a domestic context, however, and is therefore not fully fitting for the global domain and for the specific circumstances of global politics, such as the emergence of new forms of rule, new actors, and the exercise of unchecked powers (Bohman, 2004). To make the concept useful for normative purposes in the global domain, and allow it to successfully classify objects in that realm, two modifications are called for, which are acknowledged by key theorists in the literature on global political legitimacy and which still preserve the core meaning of political legitimacy as it has traditionally been understood.
The first important amendment, suggested by Allen Buchanan, is to abandon the strong idea of "the right to rule" presupposed by the traditional understanding, which presumes an exclusive right to use coercive force and is ill-fitting for global politics, since no agent or institution in global governance rules or claims to rule coercively in this robust way. There is, in other words, no global entity that could, for instance, force AI developers to focus on certain applications or avoid others. Instead, a distinguishing property of global political legitimacy seems to be a weaker coercive element, where "being morally justified in exercising political power" implies issuing rules and ascribing benefits and costs for compliance or noninterference with the efforts to govern (Buchanan, 2010: 82-84).
The second modification concerns the commonsensical understanding of "the right to rule" and "rightful authority," which is typically defined in terms of law and lawmaking and discussed in legal terms, such as the capacity to impose legal duties (Buchanan and Keohane, 2006; Christiano, 2013). Since most global governance arrangements and institutions are not lawmakers, on this reading they could exercise political power without being a proper object of global political legitimacy, and a satisfying account must recognize this. Rule-making in a global context hence ought to include not only lawmaking but also policymaking and other kinds of decision-making on political matters (i.e., matters of public concern). As we will return to below, this captures the way that certain entities currently exert influence over AI development.
In sum, on the proposed conceptual framework, the function of principles of global political legitimacy is to regulate the relationship between political entities and those over whom they exercise political power to determine under what conditions they have the right to make rules, that is, to make political decisions regarding laws and policies, and where benefits and costs are attached to, for example, compliance and noncompliance, noninterference and interference (Erman, 2020).
Moving from conceptual to normative presumptions, we assume in this article that global governance arrangements around AI, but also global governance more generally, must be at least minimally democratic in order to fulfill the requirements of global political legitimacy. It is possible, of course, to claim that the promise or threat of AI is so great that it is justified to give up democratic control over AI development in favor of rule by purported experts. We will not engage with this view, and this is not the place to defend our assumption that political legitimacy requires a minimum of democracy. We believe that the latter is a sufficiently shared view among political philosophers, harmonizing with a wide range of theories of political legitimacy, since it is expressed here only as one condition of legitimacy, not necessarily a sufficient one. This means that it is compatible with accounts requiring that further conditions are met, for example, conditions of efficiency and effectiveness, distributive justice, or the like.
Moreover, since we do not pursue a substantive argument offering a specific theory of the political legitimacy of global AI governance, but rather aim to defend certain normative conditions that should be met when constructing such a theory, it is important to adopt a broad definition of democracy, to make it compatible with all main conceptions in democratic theory, ranging from voting-centered views to views focusing on deliberation and civic engagement. Democracy is here seen as a normative ideal rather than simply a decision method, which means that the principles offered as a minimal democratic threshold for global political legitimacy are seen as part of such an ideal. To treat democracy as a decision method is rather uninteresting in the present context, since that would entail that the answer to what a democratic threshold would be is determined solely by the normative ideal that motivated the choice of democracy (as method) to begin with. Instead, our broad understanding of democracy as an ideal alludes to "the rule by the people," which is seen as a particular form of political self-determination or self-rule. On this view, a political entity (e.g., a system, polity, or institution) is democratic if, and only if, those who are affected by its decisions have an opportunity to participate in their making as equals (Erman, 2020). With these general conceptual and normative assumptions on the table concerning global political legitimacy, let us turn to the specific task of spelling out a number of normative boundary conditions that a satisfactory account of the political legitimacy of AI governance must respect.

Governance by AI and Governance of AI
In the literature on AI governance, focus is typically directed either at "governance by AI" or at "governance of AI." As mentioned in the introduction, our task of spelling out and defending some basic normative conditions that any satisfactory account of political legitimacy should respect is pursued through an elaboration of this distinction. Although the distinction is rarely explicitly noted, we suggest that "governance by AI" is a suitable term to describe the phenomenon of existing governance structures adopting AI technologies as part of their governance mechanisms, such as when public authorities adopt AI-based automated decision-making. "Governance of AI," by contrast, is a suitable term for describing the governance structures that regulate and steer AI development and deployment itself, such as the current move by the European Commission to draft legislation banning certain kinds of AI applications.
A concern about governance by AI is a concern about a subset of the issues identified in AI ethics, namely those AI-related phenomena that have to do with political rather than merely moral matters, for instance, regarding responsibility for the exercise of public authority. There is a clear tendency in institutions engaged in public decision-making, for instance, to partially or completely automate decision-making, or to rely on AI technology in other ways (van Noordt and Misuraca, 2022). The key political-philosophical issue raised by this is how the special characteristics of AI technology affect existing governance structures or enable new ones. Many worry, for instance, that the opacity of machine learning algorithms undermines the distinctively public character of decision-making that is necessary for democratic legitimacy, or that AI-based facial recognition could allow states to set up massive surveillance systems without the authority of the public (Liao, 2020: 17; Powers and Ganascia, 2020: 38).
Governance of AI refers, we suggest, to the actual or possible governing of the policy area of AI development and deployment. The issue here is not how AI technology affects existing governance, but rather what types of governance mechanisms and what actual policies would steer the development and deployment of AI technology in the direction deemed desirable, and what the characteristics of these processes are. For instance, Yeung et al. (2020) have offered a human rights-based framework for the design, implementation, and legal oversight of AI technology, and Jelinek et al. (2021) have recently suggested that the G20 should create a "coordinating committee" to deal with risks created by AI technology. The distinction between governance by AI and governance of AI is rarely mentioned in the literature, and above all, it is to our knowledge not problematized from the viewpoint of AI ethics. This is unfortunate, since it risks hiding the fact that the debate around AI governance incorporates two sets of largely different phenomena, whose relationship ought to be theorized. The normative considerations regarding global political legitimacy that we have just presented suggest that governance by and governance of AI are, in fact, related. Assessing the political legitimacy of existing and conceivable AI governance will hence require particular attention being paid to what is governed and how it is governed. For the latter purpose, we argue that the distinction has important normative implications from the standpoint of democracy and thus for global political legitimacy.

Normative Boundary Conditions for the Legitimacy of Global AI Governance
Large parts of the current global governance of AI consist of numerous standards, guidelines, policy frameworks, and laws, as well as international policies, and political entities rely on governance by AI in more and more areas. In order for global AI governance to be legitimate, we will suggest, it needs to follow the principles that regulate the relationship between the political entities involved and those over whom they exercise political power. These principles determine under what conditions the governing entities have the right to make political decisions. Recall that, according to the general understanding of democracy assumed here, the political entities (agents and institutions) making up the regulatory AI structure are democratic if, and only if, those who are affected by the decisions have an opportunity to participate in their making as equals. Given that we assume that global political legitimacy requires a democratic minimum, the democratic quality of this regulatory structure will depend on (a) what kinds of authority lend it support and on what grounds, and (b) what kinds of decision-making are conducted. This section develops this argument in order to support the wider claim that an account of the legitimacy of global AI governance must respect that governance of AI and governance by AI have a specific normative relationship and raise different normative demands.
Broadly speaking, there are two kinds of authority involved in AI governance, which have fundamentally different normative status from a democratic standpoint: what we here call "authorized entities" and "mandated entities," respectively. What we mean by authorized entities are precisely those entities which are approved directly through such a democratic procedure and thus have the right to make political decisions ("right to rule"). Examples of authorized entities are nation-state parliaments and the EU parliament. Three clarificatory remarks are in order. First, we aim to capture in more abstract terms the normative status of those entities without any necessary ties to the current statist framework, since this opens up more space for future institutional arrangements in global governance with other properties and forms (Erman and Furendal, 2022). Second, we stay neutral with regard to which kinds of agents have the opportunity to participate as equals in the procedure and thus authorize an entity, since the normative boundary conditions we spell out here are meant to capture in more general terms what a theory of global political legitimacy should respect. Finally, democratic authorization could take numerous different forms, depending on which democratic model is favored. The relevant agents could, for instance, be democratic individuals, which is typical in the domestic context, or democratic states, which is typical in many global settings. The conditions also stay neutral vis-à-vis different formal decision rules because, while we are used to individual-majoritarian rule in domestic contexts, a variety of voting rules are applied in global governance arrangements (majoritarian, weighted, unanimity, and so on). For example, while the World Bank applies a weighted procedure, the World Trade Organization (WTO) exhibits majoritarian "one-state, one-vote" decision rules.
Apart from authorized entities, what we mean by mandated entities are those entities that have been delegated political power by authorized entities, typical examples being executive bodies and administrative agents and institutions (Erman, 2020). Democratic delegation from authorized to mandated entities could take numerous forms, depending on the model of democracy we are assuming. It could also happen for a variety of reasons, including when, and because, delegation allows for higher-quality, decentralized, or more far-reaching governance, or because it is dictated by a constitution aimed at balancing powers. As we will explicate below, when authorized entities maintain a possibility of revoking the delegated power from the mandated entities, both constitute parts of a "legitimacy chain" that must not be broken.
These two different kinds of authority are in turn tied to different kinds of decision-making, from the standpoint of democracy. To explain why, we must return to the idea of democracy as an ideal of political self-determination or self-rule. The normative core of this ideal is the idea that a person is only obliged to comply with the rules she has had the opportunity to authorize. Thus, democracy as a form of political self-determination has to do with political "authorship," according to which an agent may only be rightfully coerced by decision-making insofar as she has taken part in its authorization. This means that there is a fundamental difference between coercive decision-making and non-coercive decision-making from a democratic standpoint. In practice, coercive decision-making usually comes in the form of lawmaking, whereas non-coercive decision-making typically comes in the form of policymaking.13 Here we use "coercion" broadly, to make our argument neutral vis-à-vis specific theories of democracy and compatible with the permissive understanding of global political legitimacy sketched earlier. It refers not only to a physical aspect, which is often stressed in the literature (involving force, sanctions, or threat of disciplinary action), but also to an authoritative aspect, which has to do with being subjected to authoritative commands. In both cases, benefits and costs are usually tied to compliance and noncompliance.
Typically, laws are more fundamental and formal than policies, constituting a system of rules that sets out principles, procedures, and standards that proscribe, mandate, or permit certain relationships between people and institutions. Policies usually consist of statements setting out certain procedural and substantive goals of what should be accomplished in the near or remote future. Importantly, however, policies comply with laws and are formulated within a legal framework, even if they may aim to fundamentally change an existing law or identify a new law that is needed (Erman, 2020; Erman and Furendal, 2022). In our view, these two kinds of decision-making (coercive and non-coercive) are connected to the two kinds of authorities discussed above in the following way: it is authorized entities who are the rightful lawmakers (or, more accurately, makers of coercive decisions) in the AI space. They may then delegate authority to mandated entities, which become rightful policy-makers (or, more accurately, makers of all kinds of non-coercive decisions) in the AI space through this delegation.
This means that a satisfactory account of the political legitimacy of AI governance should respect that there is a specific normative relationship between governance of AI and governance by AI from a democratic point of view, and that the two therefore raise different normative demands for global political legitimacy. Recall that governance of AI refers to the existing and conceivable governance structures that regulate and steer the policy domain of AI development and deployment. Governance by AI, by contrast, describes how existing governance structures adopt AI technology in the practice of governing. Since mandated entities gain their legitimacy through delegation from authorized entities, there is a justificatory hierarchy where the latter have supremacy over the former. This, in turn, creates a specific normative relationship between the two, and raises different demands in order for each to be legitimate. The same justificatory hierarchy exists between governance of AI and governance by AI, where the latter is normatively "parasitical" on the former. Let us explain.
Authorized entities in the AI space would be those entities which have the right to rule because they have been established through a democratic procedure in which those affected by AI (in one form or the other) have had an opportunity to participate as equals in shaping the "control of the agenda" concerning AI, generally seen as a fundamental property of democracy (Dahl, 1989). This is why authorized entities necessarily must engage in governance of AI (rather than governance by AI), since they set up the legal structure for the overall aims and societal goals of AI development and employment, as well as the basic form of the main institutions of the AI space, through coercive decision-making. These aims and goals cannot, we will suggest below, be deliberated and legitimately decided upon by an AI system. Mandated entities, on the other hand, are delegated authority to make non-coercive decisions (policies, etc.) applying to AI as a policy area or more generally, and may do so either with or without the use of AI. So, there is no principled reason why mandated agencies cannot legitimately engage in governance by AI, such as AI-assisted decision-making in government agencies, in relation to a predetermined issue or set of issues in a policy area, within an already established legal framework.14 Decisions around the overall aims and societal goals of AI development, however, can only legitimately be made by authorized entities, not AI systems themselves.
It also follows from the normative conditions defended here that authorized and mandated entities can create a legitimacy chain between those affected and the entities that create coercive law and non-coercive policy. This requires, however, that each authorization and delegation fulfills a requirement of accountability. Precisely what this requirement entails and how demanding it is depends on what democratic model is favored. What is argued here, however, is that the normative boundary conditions for global political legitimacy at a minimum require that a property of accountability is constituted by a right to revocation, that is, a right to withdraw authority/political power. In order to ensure that the right to revocation can be properly exercised, there is also a need for a degree of transparency with regard to the decision-making process. For without transparency, an authorized or mandated entity could not know whether it needs to withdraw power or not. Moreover, with regard to those affected by the decisions, a normative boundary condition is that they have a right to justification (Forst, 2011), which minimally requires publicity in the decision-making, that is, that they can be offered reasons in the individual case for the decisions made. This entails that if the last mandated entity is an AI, which typically cannot offer reasons for the decision in individual cases, governance by AI must again be assisted by governance of AI through some delegated public official or civil servant who can explain why the decision was X rather than Y in that particular case.15

If we depict these normative boundary conditions as a (circular) chain of legitimacy (Figure 1), an authorized entity owes accountability to those who authorized it (those affected, according to some criterion of inclusion). If the people are dissatisfied (according to some standard of accountability), they may withdraw the political power. Similarly, every mandated entity owes accountability to the authorized entity which lent it authority through delegation. If the mandated entity does not live up to the appropriate accountability standard (which, again, depends on the democratic theory), its authority may be withdrawn. This is the case for every step of delegation in a legitimacy chain. For it is not only authorized entities that may delegate authority to mandated entities; mandated entities may also delegate political power in a further step to an additional mandated (often very specialized) entity, as long as the first step in the chain of delegations is from an authorized entity. However, the right to revocation is tied to every delegation in the chain: an entity delegating authority may always revoke that authority if the delegate does not fulfill the demands of accountability. So, for example, if a mandated entity erodes democratic oversight through lack of transparency, it breaks the chain of legitimacy by not living up to the demands of accountability. In such a case, the chain can only be repaired by invoking the right to revocation so that the delegated power is withdrawn.
Winding up, in this section we have conducted rather abstract and broad-strokes theorizing to address the first (and main) part of our twofold aim. We believe that this is called for, since large parts of the debate on AI governance have been rather narrowly focused on specific instances of AI governance and have failed to recognize the normative significance of the distinction between governance of and governance by AI, and the outcome-focused contributions to the literature rarely ask what conditions are needed for governance arrangements to be politically legitimate. Our suggested theoretical framework raises the gaze to capture on a general level some normative boundary conditions that we think a satisfactory account of the political legitimacy of global AI governance should respect. Needless to say, what it means to respect these conditions will depend heavily on what a specific substantial theory of legitimate AI governance aims to achieve, and what its principles are meant to regulate. For example, if the intention is to offer a non-ideal theory, "respecting the normative conditions" would presumably be interpreted in less demanding terms (with principles tied to several feasibility constraints), since such a theory aims to be fully realizable. If the intention is to offer an ideal, or even an ideal and comprehensive, theory, on the contrary, not only would the conditions presumably be respected in a strict sense, without much concern for feasibility apart from the proviso that "ought implies can," but additional requirements would probably be added, concerning for instance distributive justice.

Assessing the Political Legitimacy of Global AI Governance
With this theoretical framework on the table, let us move to the second part of our twofold aim and apply the framework to current AI governance to illustrate how it may work as a compass for assessing the political legitimacy of instances of global AI governance. Since we have laid out some normative boundary conditions that a reasonable account of political legitimacy should respect, rather than developed a substantial theory, the framework cannot be used to fully evaluate all existing varieties of global AI governance, nor can it be used to specify in detail what the most politically legitimate AI governance regime would be like. Both tasks would require us to commit to specific democratic theories and their assumptions. Nevertheless, the more minimalist framework we have offered can be used as a compass to assess particular governance arrangements and indicate whether they are more or less legitimate. In light of what we have argued above, we can conceive of five possible kinds of politically legitimate AI governance in the global domain, and one common form of governance that seems to lack legitimacy.
First, regarding governance of AI, there could be authorized entities in the AI space that decide (a) on general laws that are also applicable to AI. An example of this is the EU's GDPR. Although the EU designed this piece of legislation to protect EU citizens' data in general, it will naturally have a bearing also on AI technologies that had not yet been created when the regulation was conceived, since data are key to developing and refining AI systems, and the handling of data is regulated by the GDPR. Other EU legislation also falls into this category. Consider, for instance, the existing EU directive concerning consumer rights and product liability. Although it is not an AI-focused piece of legislation, it does apply to all products sold on the EU market, and evidently also to AI systems implemented there, and it is coercively upheld by threat of fines.
Second, there could be authorized entities that decide (b) on specific laws applicable only to AI. A current example of this is the EU's push to draft the so-called "AI act," which is indeed designed specifically for emerging AI technologies. This regulation is currently going through the EU's lawmaking process, and will feature different coercively imposed rules for different AI systems, depending on how much risk they are estimated to entail. While it seems that some high-risk applications, most notably facial recognition software, will be banned in the EU, AI applications expected to carry little or negligible risk will be only lightly regulated. Compliance will be enforced by fining companies that break the rules (European Commission, 2021).
In addition, politically legitimate global governance of AI could also take the form of mandated entities that decide (c) on general policies and policy frameworks that are also applicable to AI. Consider, for example, the UN Security Council, which recently debated emerging technologies, including AI, and expressed hopes that such technologies will make the entity's peacekeeping missions more effective (U.N. Security Council, 2021). Even if the Security Council's lack of an internal democratic structure means it is a less clear-cut example than the EU, it gives a sense of how the suggested theoretical framework can be applied to current global governance structures to identify different kinds of authority and different kinds of decision-making in relation to political legitimacy.
We could also conceive of mandated entities that decide (d) on specific policies and policy frameworks applicable only to AI. Examples include the specific AI policies and frameworks that have been presented by numerous international organizations in recent years. For instance, NATO presented an AI strategy in 2021, which could become an important standard-setting document for the adoption of autonomous weapons. The organization also has a significant impact on what kinds of AI applications are developed, since it awards defense contracts to the AI industry to spur innovations in military applications of AI. NATO recently created a 1-billion-dollar investment fund for this purpose (VentureBeat, 2021). Hence, NATO is a mandated entity (being delegated power by its member states) that arguably has significant impact on AI as a policy area.
Finally, there could be (e) mandated entities that engage in governance by AI. Examples of this are so far easier to find at a domestic level: assessments from AI-driven software are now used to predict where crime will take place, and thus form the basis for how various law-enforcement agencies distribute resources between different geographic areas. And many other government agencies rely on machine learning algorithms to decide whether particular individuals qualify for certain resources and schemes.16 At the global level, it is likely that research based on or enabled by AI technology forms the basis for similar decisions regarding how to distribute resources or prioritize issues, but it is difficult to find clear instances of this so far. We could imagine, for instance, the European Central Bank or the International Monetary Fund relying on research enabled by machine learning in order to decide on policy or strategy, or even outsourcing some of their decision-making around monetary policy to algorithms. The normative considerations we have outlined in this article do not give us the final word on whether this would be politically legitimate, since this may depend on many further considerations, for example, more demanding aspects of transparency and accountability not captured by our theoretical framework. What our framework can say, however, is that since there is at least a formal delegation of power from authorized entities to mandated entities like these organizations, the condition of being minimally democratic seems fulfilled (insofar as the right to revocation and the right to justification are also substantiated).
Importantly, two interesting things follow from the framework presented. First, as we mentioned above, our view suggests that authorized entities cannot legitimately engage in governance by AI. That is, we assume that an AI system, on its own, cannot legitimately govern humans, or be the ultimate normative source of politically legitimate coercive decisions, in the sense that more traditional authorized entities can. Human decision-makers in an authorized entity such as a parliament could, of course, consult AI-based technology in order to gain information necessary to make better decisions. We suggest, however, that an AI system could not gain the status of a decision-making member of parliament itself, and that we cannot get rid of parliaments and shift to some kind of pure algorithmic decision-making without negatively impacting political legitimacy. Such proposals are only defensible if we view democracy merely as a decision method, and assume that AI systems would generally make "better" decisions than human lawmakers.17 As stated above, however, we view democracy as an ideal of self-determination, according to which those who are supposed to comply with the rules have had the opportunity to authorize them by participating in their making as equals. Given this, our framework suggests that fully outsourcing decisions to AI systems would necessarily entail a loss of self-determination, that is, a loss of democratic control. We are unable, however, to devote space to the complex questions that would need to be resolved in a full analysis of this idea, including issues such as whether the AI is superintelligent, how it captures voters' preferences, and whether it is possible to meaningfully draw a line between relying on the AI system when making decisions and delegating decisions to it.18 Second, and equally important, many of the entities that are engaged in the global AI governance currently taking form do not fit into any of the five categories.19

One of the most prominent kinds of actors involved in current global AI governance, for instance, is neither mandated nor authorized: non-state, private actors. Many AI-developing companies and interest groups have recently authored various "soft law" documents, including ethics guidelines, standards, and codes of conduct set to guide their continued work in the sphere. Examples of entities that have presented or committed to such initiatives include industry organizations like the Institute of Electrical and Electronics Engineers (2021), tech giants like IBM (2021), AI labs like DeepMind, as well as individual AI researchers and developers (Future of Life Institute, 2018). In one sense, these documents could be seen as committed to core democratic values, since they are a form of self-regulation, where relevant stakeholders voluntarily commit to an agreement that guides their future actions in the AI sphere. Hence, they could arguably be seen as an instance of "corporate social responsibility" (CSR): initiatives that aim to promote societal and humanitarian goals by supporting ethically oriented business practices. Furthermore, one could argue that if we relaxed our strict understanding of authorization, people have indeed exercised influence, in their role as consumers in a market, and in some sense authorized this kind of governance. Such initiatives could hence be considered politically legitimate, both because they ultimately promote the good outcomes that many of those interested in AI governance care about, and because on this wider interpretation of authorization, people have indeed had an indirect say in them.
In response, we recognize that initiatives from non-mandated and non-authorized entities may be part of good governance more broadly. Although one might worry that corporations embrace the soft law approach in order to delay initiatives to instantiate hard regulations, the effectiveness of either kind of governance is ultimately an empirical question on which we stay neutral here. However, insofar as the legitimacy chain depicted above is valid, we insist that if these initiatives were the only kind of global governance of AI, there would clearly be a legitimacy deficit. Applying our theoretical framework suggests that these initiatives have major normative limitations, since without delegation, there is no accountability with a right to revocation, that is, a right to withdraw authority. Whatever else these initiatives achieve in terms of good outcomes, they nevertheless lack political legitimacy, since non-mandated private actors can only "self-withdraw" political power, as it were, which means that they are not part of a legitimacy chain. Yet, the potentially very large impact of AI technology suggests that all those affected by AI technology, and not only those developing it, ought to have a say about its trajectory. Our framework does not necessarily propose that this should happen by including more people in the formulation of ethics guidelines, but rather that there must also be other governance efforts, initiated by politically legitimate mandated and authorized entities whose legitimacy chains extend sufficiently to cover all those who ought to have a say. In sum, the fact that our framework does not recognize non-authorized and non-mandated entities as politically legitimate parts of global AI governance is a feature and not a bug: it suggests that many of the largest actors in current AI governance, and the policies they produce, lack key aspects of political legitimacy as we understand it.

Winding Up
We started this article by suggesting that "AI governance" typically refers to the structures and mechanisms needed to avoid bad outcomes and achieve good outcomes with regard to an already identified set of AI-related issues. We then raised a number of concerns with this predominant outcome-focused view of AI governance. Above all, we expressed the concern that a focus on the effects of governance mechanisms runs the risk of overlooking important procedural aspects of good AI governance. Against the backdrop of this lacuna in current research, together with the two presumptions that political legitimacy is one of the most important procedural properties of good governance and that political legitimacy must be at least minimally democratic, we have developed a theoretical framework consisting of a number of normative boundary conditions that any reasonable account of the global political legitimacy of AI governance should respect. This was done by elaborating on the distinction between "governance by AI" and "governance of AI" in relation to different kinds of authority and different kinds of decision-making. Finally, we demonstrated how this framework could be used as a critical compass for assessing the legitimacy of actual instances of global AI governance. Among other things, we pointed out that both the idea of outsourcing democratic decision-making to AI systems and the soft law documents and ethics guidelines produced by non-mandated private actors in AI governance have deficiencies with regard to political legitimacy.
Recall the three worries we initially identified when considering the outcome-focused character of current research on AI governance. The first two worries, which stressed that AI governance research may become too focused on the issues that receive the attention of AI ethicists, and overlook existing regulations and institutions, seem readily fixable by simply widening the field and welcoming alternative research strategies. AI ethics research will clearly continue to play a crucial role in understanding what is at stake when AI is being developed. The third worry, however, was that AI governance risks being reduced to the question of how to implement and enforce an already fixed agenda. We have tried to address this worry by offering normative considerations that enable a more profound and systematic understanding of global AI governance by problematizing the very assumptions upon which an outcome-focused view relies. The outcome-focused view investigates the best solutions to a list of already identified issues that AI actualizes. The values of political legitimacy and democracy, however, raise more fundamental questions about how this list was identified and agreed upon in the first place, who was involved in deciding upon it, and through what procedures. Moreover, they raise questions about the overall aims and societal goals of AI development and employment, which are typically already implicitly presupposed when deciding on the mechanisms and structures needed to achieve good outcomes and avoid bad outcomes in relation to specific AI-related issues.
Of course, our approach does not in any way undermine the outcome-focused view of AI governance. Rather, it complements it by directing our focus elsewhere, on the procedural aspects that also should be taken into consideration when theorizing good AI governance. At best, it offers a more comprehensive analysis of what good AI governance entails in the global domain. Obviously, this has to be investigated in much more detail, but we suspect that even the narrower outcome-focused view would benefit from being incorporated into a broader approach to AI governance, such as the one sketched here.
approach that we describe below, the described relationship could be said to be the opposite. AI ethics, in the sense of the adoption of ethics guidelines or codes of conduct seeking to shape stakeholders' decisions, could be seen as one kind of AI governance. It is our procedural focus that leads us to the inverse analysis.

4. Thanks, in part, to intellectually stimulating contributions such as Bostrom (2014), and research infrastructure built around the issues thus conceived, as well as the fact that new fields of study will be populated through clusters around existing research, that is, building on the important early contributions.

5. Apart from contributions about (abstract) governance models and hypothetical mechanisms, such as Yeung et al. (2020), Jelinek et al. (2021), and Wirtz et al. (2020), there is also research on existing governance mechanisms, although it is still often quite general and descriptive. See, for instance, Butcher and Beridze (2019). The recent survey by Schmitt (2022) suggests, however, that actual governance initiatives are often subsumed within existing governance structures.

6. For instance, whether a particular system for governing AI is legitimate cannot be settled simply by asking whether people perceive it to be legitimate, or by seeing whether it produces "good" outcomes; it will depend on a range of political-philosophical questions.

7. In political philosophy more broadly, there are of course exceptions, for example, historical (state of nature) views like Lockean accounts grounded in tacit consent, or "pre-political"/moral views like utilitarian accounts grounded in beneficial consequences (Peter, 2017). And in empirically oriented political theory, there are many proposals of a more non-ideal theoretical nature which theorize global political legitimacy in relation to other central notions, such as accountability and transparency, with no necessary connection to democracy (e.g., Scherz and Zysset, 2020).

8. Furthermore, whether (and to what extent) this "skeleton" notion of political legitimacy is equated with "democratic legitimacy" depends on which substantive theory is defended (for some theorists, we must add, for example, deliberative qualities and a well-functioning public sphere to have democratic legitimacy). Here we only discuss the normative boundary conditions we believe that any plausible account of the political legitimacy of AI governance should satisfy, in one way or another, given that we assume that political legitimacy at least requires a minimum of democracy.

9. Needless to say, what constitutes the most plausible criterion of inclusion for spelling out what "being affected" entails is heavily debated in the literature on the boundary problem of democracy. But whether we argue for, for example, the "all-affected interests principle" or the "all subjected principle" depends on which substantial democratic theory we defend, on which we stay neutral here.

10. Overviews of existing governance structures can be found in, for instance, Jobin et al. (2019) and Butcher and Beridze (2019).

11. This is not to say that there is a lack of research into governance by and of AI, respectively. Other concepts are sometimes used to describe a similar distinction: some use the term "governance by algorithms" to describe how algorithms influence the information we receive and how we construct our social order (Just and Latzer, 2017: 239). See also Katzenbach and Ulbricht (2019). "Algocracy," or what we here call governance by AI, has been discussed in, for instance, Danaher (2016). A contribution of this article, however, is that we not only draw the distinction but also begin to theorize how the two aspects relate to each other normatively.

12. Indeed, having "an opportunity to participate in their making as equals" may mean very different things depending on which substantial democratic theory is defended, but typically it means at least having a formalized "say" in the decision process, in the form of equal voting rights. However, many democratic theorists, such as advocates of participatory democracy and deliberative democracy, would add a number of conditions to fulfill the requirement of "opportunity to participate as equals," like having access to deliberative fora that feed into the decision process, an active civil society, and so on. In addition, having an opportunity to participate is on some accounts further tied to the requirement that a sufficient number of people in fact use this opportunity and exercise their political rights. But since we do not defend a specific substantive theory here, we leave these questions open.

13. However, this is not necessarily the case. Especially in global governance, the coercive property moves across these categories, since there are laws that are non-coercive and thus look more like policies, such as global administrative law.

14. As we clarify below, these are not all-things-considered judgments saying that these practices are legitimate or not. Instances of AI-assisted decision-making could be deemed illegitimate for a number of other reasons, including being based on biased data, or simply for making too many mistakes. And authorized entities could rely on AI-based assistance in their decision processes.

15. Note that the proposed normative boundary conditions that a substantive account must fulfill are not divided into fundamental and derivative conditions with regard to normative status. In order to do so, we would have to offer both a substantial account and a moral grounding, which is beyond our aim. We only claim that all these conditions must be satisfactorily fulfilled (again, how and to what extent depends on the first-order theory) and that there is no a priori normative hierarchy between them, even if two properties are "derivative" in the sense that the right to justification requires publicity and the right to revocation requires transparency. Thanks to a reviewer for making us clarify this.

16. Many American police departments rely on predictive algorithms (Jorgensen, 2021), and the Austrian public employment agency was recently criticized for using algorithms to categorize job-seekers and direct its support efficiently (cf. Allhutter et al., 2020).

17. But as stated earlier, the defense of democracy as a decision method then has to be spelled out by the normative ideal that motivated the choice of democracy as method in the first place.

18. The literature on this is still sparse, but see McEvoy (2021) for a defense of AI systems governing humans. Note that we do not claim that governance arrangements where AI systems have some authority over us are inherently illegitimate, but rather that they lose some of their legitimacy, which perhaps could be compensated for by other virtues.

19. That this conclusion follows from our theoretical framework illustrates a key way in which the procedural focus of our analysis differentiates it from earlier contributions to this debate. Although insightful and influential, Floridi et al. (2018), for instance, operate within an outcome-focused approach. While they make a strong case for why AI should be steered toward having "socially preferable" outcomes (Floridi et al., 2018: 704), they do not discuss what those outcomes are or how this could be legitimately decided.
Our framework, by comparison, explains the importance of features such as accountability and a right to revocation, for the legitimacy of how we agree on desirable outcomes.

Figure 1. Normative boundary conditions as a (circular) chain of legitimacy.