Governing artificial intelligence in China and the European Union: Comparing aims and promoting ethical outcomes

Abstract In this article, we compare the artificial intelligence strategies of China and the European Union, assessing the key similarities and differences regarding what the high-level aims of each governance strategy are, how the development and use of AI is promoted in the public and private sectors, and whom these policies are meant to benefit. We characterize China’s strategy by its primary focus on fostering innovation and a more recent emphasis on “common prosperity,” and the EU’s on promoting ethical outcomes through protecting fundamental rights. Building on this comparative analysis, we consider the areas where the EU and China could learn from and improve upon each other’s approaches to AI governance to promote more ethical outcomes. We outline policy recommendations for both European and Chinese policymakers that would support them in achieving this aim.


Introduction
Artificial intelligence (AI) has unprecedented capacity to reshape individual lives, societies, and the environment. Recent advances in AI have created strong incentives for countries to develop governance strategies for maximizing the potential benefits of AI technologies whilst mitigating their risks. More than 60 countries have released AI policy documents. 1 The strategies of the European Union (EU) and China 2 are two of the most comprehensive efforts to promote and govern AI, with both indicating what they believe a "Good AI Society" should look like (Cath et al. 2018; Roberts, Cowls, Hine et al. 2021). To date, there have been few efforts to compare these visions for a "Good AI Society" and, in doing so, to understand the granular differences in aims, processes, and outcomes. Instead, the comparative literature has largely focused on comparing specific capabilities of China, the EU, and other states, using metrics such as data availability, concentration of AI talent, and patents filed (see, for example: Castro, McLaughlin, and Chivot 2019; Ding 2018; Imbrie, Kania, and Laskai 2020; Lee 2018). The failure to compare the aims, processes, and outcomes of China's and the EU's approaches to AI is a notable omission, given the value that such analyses could provide to policymakers and academics trying to identify and introduce best practices across contexts. This is not to say that no literature has considered how some specific best practices for China and the EU can be adopted. A handful of articles have focused on how elements of China's AI innovation strategy could be adopted in the EU to improve global competitiveness (Arcesati et al. 2018; Candelon et al. 2020; Elliot 2020), and how specific industrial practices within Europe could improve efficiencies within China (Ding 2020b). However, these works do not analyze the strategies holistically and systematically, and give little consideration to ethical best practices and outcomes.
The task of this article is to help fill these gaps. We do this in two ways. First, we assess the key similarities and differences regarding what the high-level aims of each government's strategy are, how the development and use of AI is promoted in the public and private sectors, and whom these policies are intended (or claimed) to benefit. Second, we consider which, if any, governance practices could be drawn upon in each respective jurisdiction to improve ethical outcomes. We selected the EU and China as case studies because, while they are both global leaders in developing and deploying AI technologies, they have taken divergent approaches to governing AI. China has explicitly taken an innovation-first approach, which is predominantly concerned with maximizing the utility of AI (Duan 2020; Zeng 2020). That being said, promoting "common prosperity" through consumer protection measures and checking the power of large technology companies is becoming a higher priority, as will be discussed later in this article. In contrast, the EU has consistently prioritized mitigating potential ethical risks to protect fundamental rights (Brattberg, Csernatoni, and Rugova 2020).
Before moving to the body of this article, it is necessary to outline what we mean by our stated second aim for this article: to "improve ethical outcomes." Despite high-level similarities of different national AI strategies, what constitutes a "Good AI Society," and how this should be achieved, depends on cultural and political factors (Duan 2020; Floridi et al. 2018; ÓhÉigeartaigh et al. 2020). Much of the extant debate on AI governance assumes foundational values: human rights, democracy, and human dignity. However, the Western and Chinese interpretations of these concepts and their roles in the respective value systems differ (Brown 1997; Chan 1999; Ni 2014; Wong 2020). With this in mind, taking an absolutist position that granularly defines how to improve ethical outcomes would be detrimental to the aims of this article, as it risks promoting ethical values that are misaligned across cultural contexts. Equally, failing to assess policy choices on account of perceived cultural differences is morally relativistic and inadequate for checking blatantly unethical practices.
To avoid these traps, we adopt a stance of ethical pluralism that recognizes agreement might be limited to the existence and nature of fundamental values, but that their interpretation may differ depending on the context. 3 Ethical pluralism, in contrast to an absolutist position, is based on respect for and tolerance of a wide, though not boundless, space of reasonable differences and disagreements. We recognize that there may be multiple, valid interpretations of how fundamental values should be enacted in AI strategies. Nonetheless, we perceive that there are limitations to the degree to which some interpretations of these fundamental values are normatively acceptable. Inevitably, one determines these boundaries of acceptability through moral reasoning based on one's own particular value set (Angle 2002).
Finally, just as we accept the political realities and work within the cultural constraints of each jurisdiction, we must also acknowledge that by assessing "approaches to" AI, we risk the implication that its adoption, regardless of ethicality, is inevitable. It may seem that our stance does not allow for the notion that the most ethical approach to AI may be no AI at all, at least sometimes or in some contexts. On this point, we must again admit pragmatism in our approach. The evidence we will examine makes it clear that Chinese and European policymakers see AI as not only inevitable but also as geopolitically significant. For the purposes of the article, we think it is valuable to engage with policymakers on these terms. There are two benefits to doing so. First, because the policymaker is best placed to steer AI in a more ethical direction, discourse from the policymaker's stance is more likely to effect ethical action and change. Second, because shedding light on how policymakers seem to view AI may help scholars and civil society advocates contend with or oppose the adoption of AI outright. Whether AI should be adopted at all in the first place, or under what conditions, remains a crucial issue, but not one within the scope of this article.
With these clarifications in mind, our analysis and recommendations will predominantly focus on making practical, process-based improvements that are feasible within the constraints of each jurisdiction's cultural norms, political system, and the key aims outlined within the respective AI strategies. 4 The remainder of this article will be structured as follows: we first summarize what China's and the EU's respective AI aims and strategies are, including high-level similarities and differences. Next, we consider how the differing approaches to those high-level aims are being enacted, with a particular focus on the mechanisms that the respective governments are using to promote the development and use of AI in the public and private sectors. Thereafter, we consider whom these policies are meant to benefit, comparing the EU's explicitly individual-focused model and China's implicitly societally-oriented policy documentation. Building on this foundation, we then turn to the last section, where we discuss which policy measures from the EU and China could be emulated, adapted, and built upon in order to improve ethical outcomes in the other jurisdiction.

China's AI policy
In July 2017, China released its national AI strategy, entitled the New Generation Artificial Intelligence Development Plan (AIDP), which outlines the country's geopolitical, fiscal, and legal/ethical aims regarding AI technologies. The document sets three developmental milestones for 2020, 2025, and 2030 (shown in Figure 1). The AIDP suggests that AI is to become the main driving force behind China's industrial upgrading and economic transformation. At the same time, it emphasizes the importance of minimizing the risks associated with the transformation of employment structures, violations of personal privacy, and challenges to international relations norms. The central ambition of the AIDP is for China to become a world-leading innovation center for AI by 2030, with an AI industry valued at 1 trillion yuan ($147 billion), along with robust, well-established laws and ethical standards for the emerging challenges of AI.
The AIDP is not the only policy document to focus on AI within China, with a handful of policies released prior to 2017 mentioning AI as a priority. In 2015, the State Council launched "Made in China 2025": a ten-year plan to transform China into a dominant player in global high-tech manufacturing. A year later, the Central Committee of the Communist Party of China (CCP) outlined its 13th five-year plan, which mentioned AI as one of the six critical areas for developing the country's emerging industries. However, AI is just one of a number of technologies mentioned within these policy documents. Hence, the AIDP's release is referred to within Chinese media as marking "year one" of China's AI strategy (Tsinghua University 2019).
Several policy documents released since the AIDP build upon the high-level aims outlined in the development plan. For instance, in keeping with one of the aims for 2020, ethical principles were published by the National Governance Committee for the New Generation Artificial Intelligence in June 2019. These principles are accompanied by softer measures, such as an ethics code published by the same body in September 2021, which aims to "integrate ethics into the entire lifecycle of AI." The Guidelines for Artificial Intelligence Ethical Security Risk Prevention, published by China's cybersecurity standards body in January 2021, are a similar soft measure. There are also signs that these initiatives will be accompanied by harder measures, in line with the aim to codify ethical standards into law by 2025. For instance, the Cyberspace Administration of China (CAC) recently published a regulation for recommender systems (CAC 2021a), and nine ministries jointly released a guiding opinion on algorithmic governance, in which they stated that their objective was to develop a comprehensive governance ecosystem within the next three years (CAC 2021b).

EU's AI policy
In April 2018, 24 EU countries and Norway signed a Declaration of Cooperation on AI, 5 formalizing their intention to promote a collective European response to the opportunities and challenges presented by AI. In the same month, the European Commission's Communication on the European Approach to AI (hereafter "the Communication") was published. This document emphasizes that, given fierce global competition, building on its own values will enable the EU to lead the way in developing and using AI that is "good for all." Three overarching aims are outlined as priorities. The first is to boost the EU's technological and industrial capacity across the economy, in both the private and public sectors. The second is to prepare for the changes brought about by AI through anticipating market change, modernizing education and training, and adapting social protection systems. The third is to ensure that there is an appropriate legal and ethical framework that is in line with EU values.
Similar to China's AIDP, the Communication provides the core for understanding an EU approach to AI, but it was predated by a report published by the European Parliament's Committee on Legal Affairs on robotics and AI (Cath et al. 2018), and has since been complemented and clarified by a number of other policy documents. Notably, the European Commission appointed the High-Level Expert Group on Artificial Intelligence (HLEG), a group of 52 experts, 6 in 2018 to support the EU's strategy for AI outlined above. The HLEG produced Ethics Guidelines for Trustworthy AI (2019), which were developed to ensure a "human-centric" approach to AI, and 33 "Policy and Investment Recommendations" (2019), which seek to guide trustworthy AI toward sustainability, growth, and competitiveness.
The EU's approach to governing AI was substantiated in April 2021 with the release of the European Commission's proposal for an AI Act. It is a risk-based regulation for AI, which outlines a four-tiered framework based on the potential risk to health, safety, and fundamental rights: unacceptable, high-risk, limited risk, and minimal/no risk. Systems deemed to be of unacceptable risk will be prohibited, those that are high-risk will be subject to ex ante and ex post requirements, those of limited risk will be subject to transparency requirements, whilst those of minimal risk will be encouraged to follow codes of conduct. Violating the provisions of this proposed regulation could lead to fines of up to 6% of global turnover or 30 million euros, whichever is greater.
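The proposed penalty ceiling is simply the greater of two quantities. The following sketch illustrates the arithmetic; the function name and structure are our own illustration, not part of the proposal, and the figures reflect the April 2021 draft text only.

```python
def ai_act_max_fine(global_turnover_eur: float) -> float:
    """Illustrative ceiling on fines under the draft AI Act:
    up to 6% of global annual turnover or EUR 30 million,
    whichever is greater (April 2021 proposal)."""
    return max(0.06 * global_turnover_eur, 30_000_000)

# For a firm with EUR 1 billion in turnover, 6% (EUR 60m) exceeds
# the EUR 30m floor; for a firm with EUR 100 million, the floor applies.
print(ai_act_max_fine(1_000_000_000))  # 60000000.0
print(ai_act_max_fine(100_000_000))    # 30000000.0
```

The fixed floor means that, for any firm with turnover below 500 million euros, the 30 million euro figure is the binding ceiling.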

Comparative high-level aims
On the surface, the EU's and China's strategies can be considered broadly consistent: both aim to foster global leadership in AI, improve social outcomes, and increase economic output, whilst curtailing potential harms. However, there are important distinctions to consider as to what the aims and rationales of each government are and whom these approaches aim to benefit, something that we will cover in greater depth in the section "Whom AI is meant to benefit." Despite the EU's and China's AI strategies emerging at similar times, there has been a key difference in framing that has pushed the governments on distinct trajectories. In the EU, policy discussions about AI governance were largely prompted by the widespread adoption of AI and a desire to control the potentially harmful impacts of use (Cath et al. 2018). In contrast, policy discussions in China mostly emerged after the "Sputnik Moment" of AlphaGo's 2016 and 2017 victories over champion Go players, Lee Sedol and Ke Jie, which underscored the potential of AI (Lee 2018). In this sense, China's strategy was predominantly sparked by an attempt to seize the potential that AI has to offer. In the years that followed, there was a clear emphasis in the EU on mitigating harms, while Chinese policy and discourse mostly 7 favored an innovation-first approach (Duan 2020; Zeng 2020). There are indications that this distinction is now beginning to blur. The Chinese government has begun placing an increased emphasis on "common prosperity" policy measures, which seek to assert greater political control over companies and curtail harms to society, often at the expense of innovation (Sun 2022). In the EU, some commentators have argued that the AI Act is framed in terms of promoting innovation rather than protecting rights (Del Castillo 2021). While this claim is questionable, it is true that innovation-focused elements of the AI Act, such as regulatory sandboxes, have moved to the center of debates in Europe (Bertuzzi 2022).
The type of geopolitical competition that is outlined within respective AI policy documents is another point where the two approaches depart from each other. The AIDP's aims encompass the realms of science and technology, talent retention, standards and regulations, and the military benefits that AI may offer. This last point must be highlighted. It is stressed that "China must … seize the strategic initiative in the new stage of international competition in AI to … effectively protect national security." AI is framed as something which can provide China with new competitive advantages in national defence through making "leapfrog developments." Although not explicitly stated within the AIDP, this is likely a response to perceived US hegemony and is a natural evolution of the Chinese military strategy of the 1990s and early 2000s, which was characterized by developing asymmetric capabilities that target US vulnerabilities (Kania 2017). This contrasts with the EU's strategy, where competitiveness is explicitly outlined in terms of investment, development, and use.
The aim of developing AI for defence is largely absent from the EU's strategy, and in fact is explicitly excluded from the AI White Paper, with the Commission placing a greater focus on fostering international dialogue (European Commission 2018). Likewise, the Parliament suggests that the EU should focus on cooperation and take a leading role in developing an international governance framework for military uses of AI (European Parliament 2020).
This distinction may seem stark. China's aims, and the practical development of military uses of AI (Pecotic 2019), could be considered detrimental for promoting stability, trust, and the safe deployment of AI systems (Scharre 2019;Taddeo and Floridi 2018), whilst the EU's strategy could be read as a proactive attempt to promote cooperation. However, such an analysis would be overly simplistic. The EU's European Defence Fund has earmarked between 4% and 8% of its 2021-2027 budget to address disruptive defence technologies and high-risk innovation (Csernatoni 2019b), and individual Member States such as France have outlined their intention to develop military AI technologies (Franke and Sartori 2019). Efforts are also beginning to emerge to establish collaboration with NATO on the use of AI capabilities for defence purposes (NATO 2020).
When considering the EU's approach to AI for defence, one may argue that it arises from the political reality of being an intergovernmental/supranational organization, as much as from any ideological reservations (Kagan 2007). That is to say, the EU's approach to the use of AI for defence purposes, or lack thereof, largely permits individual Member States free rein to pursue the technology in this dimension. Similarly, there are well-documented reservations amongst Chinese officials about the potential that developing AI for defence purposes might have for conflict escalation (Allen 2019), though these concerns are generally outweighed by a fear that asymmetric AI advances by the US military could lead to heightened vulnerability (Fedasiuk 2020). Consequently, capability and threat perception should also be considered underlying factors in the distinction between China and the EU's approaches to AI for defence.
These high-level differences regarding what each government aims for provide important points of divergence between the national AI strategies of China and the EU. However, it is only through considering AI governance at a more granular level, particularly in regard to how the respective governments go about enacting their aims and whom these policies are meant to benefit, that the substantive differences in approaches become clear.

China's approach to enacting its aims
In China, several central bodies provide funding for AI, including the National Natural Science Foundation of China, National Key R&D Programs, Megaprojects, and Government Guiding Funds (Colvin et al. 2020). However, it would be misplaced to view the development and use of AI in China as being controlled entirely by the central government. Instead, the development and implementation of AI is guided by incentive mechanisms which encourage public and private sector actors to follow the agenda set out in the AIDP.
In terms of the private sector, the Ministry of Science and Technology created an "AI National Team," a group of technology companies that are endorsed by the Chinese government as "champions" for researching and developing specific applications of AI. The first companies were selected in November 2017 and included Baidu, which was tasked with the development of autonomous vehicles, and Tencent, the anointed champion of computer vision for medical applications. The "national champion" status provides companies with preferential access to government projects and associated data. It is worth noting that this does not exclude the involvement of other companies, as governmental policies also foster open innovation and the sharing of resources, ideas, and data domestically. Indeed, in addition to integrating their technologies into the real economy and providing annual progress reports, AI national champions are expected to act as open innovation platforms that smaller companies can use. Champions are also required to establish industry standards in their areas of specialism. Currently, there are 15 national champions encompassing a range of sectors (Larsen 2019).
This private sector development is complemented by a governance model of "fragmented authoritarianism," which delegates some powers to local government vertically, and shares power horizontally between central government agencies (Mertha 2009; Zeng 2020). The role of local governments is of particular note here. There has been a concerted effort by Chinese cities and provinces to outline their own local AI strategies (Ives and Holzmann 2018), which is unsurprising given the strong incentive system from central government, which encourages the development of local AI industries and the implementation of AI applications. Short terms in office and promotions based partly on economic performance create incentives for provincial politicians to comply with central government policy and to undertake regional experimentation in an attempt to boost local economies (Xu 2011). In regard to AI, this entails implementing a long "wish list" of technologies outlined within the AIDP (Lee and Sheehan 2018), with the AI National Team and other technology firms acting as willing partners in this endeavor. As will be outlined more fully in the next section, considerations of ethics are currently limited within this model.

This is not to imply unfettered promotion of AI development and use in China. The AIDP explicitly outlines the importance of establishing "laws, regulations and ethical frameworks to ensure the healthy development of AI." In recent years, soft measures that utilize social opposition to curb corporate excess have been relied on. This has been especially noticeable in regard to consumer privacy, where there has been repeated public backlash against excessive data collection and misuse (Roberts 2021). Legislative measures are now being introduced to complement this social opposition, which is indicative of a "responsive authoritarian" style of governance (Heurlin 2016) that seeks to understand and address citizens' concerns, within reason.
As will be discussed in greater detail in the next section, this includes the Personal Information Protection Law (2021) and the Data Security Law (2021) that were introduced, at least in part, to bolster consumer privacy protections.
The recent emphasis by the Chinese government on achieving "common prosperity" 8 has led to an intensified use of these mechanisms, with public pushback and governance interventions emerging in areas ranging from the exploitation of gig economy workers to antitrust crackdowns against large technology companies. Importantly, despite this recent intensification of efforts, the measures introduced appear to be largely ad hoc, both in terms of how long the crackdown will be prolonged (Wang 2021) and the types of practices that will be targeted. This is perhaps an inevitable consequence of balancing efforts to support innovation, respond to and utilize populism, and further enhance central government control.

The EU's approach to enacting its aims
The EU's approach to developing AI differs from China's and centers foremost on harnessing its regulatory and norm-establishing power to define clear boundaries for the ethically acceptable development and use of AI (Csernatoni 2019a). This was initially outlined in the Communication, which stressed that AI should be developed in line with EU values, and was further clarified by the HLEG. The HLEG's Ethics Guidelines and associated Assessment List provide clarity for practitioners as to how they can bring their systems in line with European values (European Commission 2020). This guidance will be strengthened through the regulatory provisions of the draft AI Act, which explicitly outline the levels of risk that different AI technologies pose and will place corresponding restrictions on each level once the Act passes.
On top of establishing norms, the EU employs mechanisms for guiding and incentivising both the private and public sectors. For the private sector, the EU promotes research and development into AI through investment mechanisms, including InvestEU, the Digital Europe programme, the European Investment Fund, and Horizon Europe (the successor of Horizon 2020). Some of this investment is used to fund specific AI research; for instance, Horizon 2020 provided funds for AI applications, including the establishment of a €35 million research fund to support firms developing medical imaging AI technologies for oncology (European Commission 2019). However, the EU's approach is not limited to R&D funding. Of particular note is the proposed development of Digital Innovation Hubs, co-funded by Member States and the EU, which would seek to support businesses in their digital transformation efforts by providing access to technical expertise, training, and innovation services. Controversially, some applications that have received funding through the Horizon 2020 initiative appear to contravene fundamental rights, such as the "iBorderCtrl" deception detection system trialled at airports (Jupe and Keatley 2020).
There have also been efforts by the EU to encourage and coordinate public sector aims amongst Member States. Within its Coordinated Plan, the Commission promotes the development and publication of national AI strategies, including investment and implementation plans. Alongside this, the Plan stresses the importance of biannual meetings to coordinate actions across Member States, the creation of monitoring and participatory bodies, and the development of common regulatory standards to avoid market fragmentation. By coordinating efforts between Member States, the EU hopes to maximize the impact of investments, encourage cooperation, and exchange best practices. The HLEG's Policy and Investment Recommendations seek to provide further guidance to Member States as to where they should focus their efforts. Substantial progress has already been made, including through the creation of the European AI Alliance and AI Watch, bodies established to solicit feedback from a diverse set of stakeholders and to monitor AI development, respectively. Finally, the recent AI Act proposal will establish a European AI Board, composed of Member States' national competent authorities, the European Data Protection Supervisor, and the European Commission, which seeks to facilitate the consistent implementation of the Act across the EU.

Comparing approaches to achieving outcomes
These contrasting approaches to promoting the development and use of AI provide another point of comparison. China's approach relies on a system of innovation incentives for public and private actors, and then uses ad hoc measures to curtail harms when they are seen to arise. In contrast, the EU's strategy seeks to set ethical parameters and provide upfront support to help Member States and private sector companies succeed within said parameters. As a result, it is worth comparing the success of these mechanisms in achieving the mutual high-level aims of positive economic outcomes, international competitiveness, and ethical governance, which were identified as important in both China and the EU.
When comparing these implementation mechanisms, the EU's model has generally been praised for its focus on protecting fundamental rights; however, there have been several detractors of the EU's rights-based approach. Typically, the EU's strategy has been criticized for being too focused on rigid and potentially innovation-stymieing governance measures. This has led commentators to argue that the EU's prescriptive approach may not be robust to longer-term changes (Roberts, Babuta, Morley et al. 2022) and that an oversized focus on protecting fundamental rights may harm efforts to stimulate a competitive AI market (Brattberg, Csernatoni, and Rugova 2020). The EU's approach has also been criticized for providing inadequate protections of fundamental rights. Veale and Borgesius (2021), for instance, stress that the details of many of the standards that high-risk AI systems in Europe must follow will largely be fleshed out by European standards bodies. This risks undemocratic governance by technical bodies with which many interest groups within society have historically had difficulty engaging.
In contrast, scholars have generally praised China's policy approach on the grounds that it has led companies to adopt and deploy AI technologies (Braw 2021; Candelon et al. 2020). That said, a number of drawbacks can also be identified with China's model for achieving its aims of economic and social prosperity, global competitiveness, and "the healthy development of AI." Although the innovation focus of China's model is often praised, from an economic perspective, the power afforded to these national champions is likely to limit domestic competition. By allowing national champions to develop standards that affect wider industry rules, innovation ecosystems can be tailored to national champions' strengths (Leeds 1996), despite commitments to "open innovation." 9 Viewing this policy from the perspective of the AIDP's other stated aim of promoting international competitiveness helps to rationalize this strategy, with the promotion of national champions helping China to incubate first-tier companies that define international standards and "set the rules of the game." Regulators are thus left with the challenging task of balancing promoting platform innovation and international competitiveness on the one hand, with ensuring that market abuse does not take place and that the influence of business leaders is checked on the other (Yu 2020; Zhang 2021). Recent moves to crack down on Chinese technology companies through measures in the name of "common prosperity" indicate that a desire for control is currently winning out. However, these tough measures could come at the cost of market innovation.
Criticisms could also be leveled against how some aspects of the "wish list" approach play out in the public sector. In terms of achieving positive economic outcomes, fierce competition has emerged between local governments to attract AI companies and top talent, with the latter in relatively short supply (Ives and Holzmann 2018). The lack of coordinative oversight between Chinese provinces and cities has led to a disjointed approach, extensive waste, policies of localism, and fraudulent companies taking advantage of a willingness to invest (Gan 2021; Zeng 2020). There is some coordination between local governments that helps to check these excesses, such as the creation of specific National New Generation Artificial Intelligence Innovation and Development Pilot Zones, which encourage a focus on specific specialisms (Murphy 2020). However, these Pilot Zones have drawbacks. For example, Ding (2020a) doubts whether the narrow specialisms of some of the smaller zones, such as the city of Hefei's focus on speech recognition, are diverse enough to attract an adequate number of startups to sustain the ecosystem at large. More broadly, questions remain as to whether a centrally-defined "wish list" model, if followed closely, is the most effective allocation of resources in a fast-moving field such as AI (Nelson 2019).
The extent to which China's "wish list" approach, as currently enacted, will produce ethical outcomes can also be challenged. If implemented with societal consultation and engagement, it has the potential to provide clear and ethical guidance to companies and local governments about the types of technologies to introduce and how. However, this model of innovation provides few mechanisms for taking into account the needs of all relevant stakeholders, suggesting it is unlikely to directly serve the needs of all citizens (Simon 2019). Whilst China's "responsive authoritarian" model may be effective at introducing ad hoc protections in response to some concerns, it is unlikely to effectively address the needs of minority groups, such as the disabled, who are not explicitly mentioned in the AIDP (Parkin 2019). Subsequent regulation does show promising signs that the needs of minority groups are being considered by the government, with the importance of inclusive design for "intelligentized services" outlined in Article 15 of the 2021 Data Security Law. Similarly, the Guidelines for Artificial Intelligence Ethical Security Risk Prevention specifically emphasize that paying special attention to vulnerable groups is a key element of risk prevention. Nonetheless, the crackdown on numerous civil society groups and other marginalized constituencies within China, which might otherwise advocate for the development of AI that is inclusive of different communities (Lucero 2020), is indicative of the limitations of relying solely on high-level provisions.

Whom AI is meant to benefit
A final important difference between the EU's and China's approach to AI governance becomes apparent when considering whom the development and use of AI is supposed to benefit. Throughout EU policy documents, there is an explicit focus on fostering "human-centric" AI, which rests on the foundational values of the EU: "respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities" (European Commission 2018). The ideas of human autonomy, dignity, and rights as understood within the EU model focus on the individual as the predominant unit of analysis. This is not to say that other benefits are not considered. The Commission's AI White Paper explicitly states that AI should not just bring benefits to the individual but also to society, for example, by achieving the UN Sustainable Development Goals, and through developing European business. Nonetheless, these aims are not weighted equally, with societal and business interests largely subsidiary to protecting the rights of individuals.
Despite high-level similarities with the EU's aims, Chinese governance documents take a different approach as to whom AI should benefit. In the AIDP, individual rights are rarely mentioned as the focal point, 10 with a clearer emphasis placed on the benefits that can be brought to China in terms of the country's international competitiveness, economic development, or societal improvement. In terms of societal benefits, for instance, the focus in the AIDP is on "social construction," with explicit references to using AI for "preserving social stability" and "grasping group cognition." Considering the mechanisms for achieving these aims in practice reveals tensions with individual rights, such as the introduction of the widely reported yet currently underdeveloped Social Credit System(s); this system can be understood as an attempt to nudge individual behaviors to achieve outcomes that the government considers societally beneficial, notably public trust (Creemers 2018). 11 Consequently, the AIDP can be seen as "human-centric" in a different sense, namely one that places a greater emphasis on the benefits that can be brought to China as a state and/or society, with the focus on the individual largely secondary. Analyzing the sets of ethical principles that have been released by China and the EU, and their interpretations in practice, including through the regulatory mechanisms that have substantiated them, makes the differences in whom AI is meant to benefit more clear-cut. In 2019, the HLEG put forward seven high-level principles for ensuring the ethical use of human-centric and trustworthy AI that is aligned with the EU's values. Whilst other principles have been released within the EU, by both member state government bodies and the private sector (Jobin, Ienca, and Vayena 2019), the HLEG's principles act as the cornerstone of the EU's approach.
In China, there is not a single set of government-endorsed ethical principles comparable to those produced by the HLEG. The Ministry of Science and Technology (MOST) has been most active in this space, endorsing the Beijing AI Principles for Responsible AI Research and Development, which were developed by the Beijing Academy of AI. Shortly after, the Governance Principles for the New Generation Artificial Intelligence were released by the National Governance Committee for the New Generation Artificial Intelligence, a body established by MOST. In September 2021, this body published the Ethical Norms for the New Generation Artificial Intelligence, which explicate these principles at a high level across the AI lifecycle. Whilst the work by MOST appears to be authoritative in this space, it is worth recognizing the role of other ministries. A "Joint Pledge on AI Industry Self-Discipline," which contains a number of AI ethics principles, was released in 2019 and championed by the Ministry of Industry and Information Technology. Additionally, a committee under the Chinese Ministry of Civil Affairs is discussing developing AI ethics principles (Gal 2020).
At a high level, there appear to be a number of similarities between the ethical principles produced by the EU and the principles outlined in these documents within China (see Table 1), something that is consistent with a general coalescing of AI ethics principles globally (Floridi and Cowls 2019; Jobin, Ienca, and Vayena 2019; Zeng, Lu, and Huangfu 2018). However, a deeper examination of the principles makes the differences in whom AI is meant to benefit more explicit, with a far greater emphasis placed on the individual in the EU's approach.

Harmony
At a high level, the notable difference between the sets of principles released by the EU and those within China is the inclusion of "harmony" within the latter, which is mentioned in two of the three government-endorsed AI ethics principle-sets. The New Generation AI Principles promote human-machine harmony, which speaks to a wider aim of guaranteeing that AI development advances the well-being of humankind, including ensuring the protection of social security and respect for human rights. In the Beijing AI Principles, harmony is promoted in the sense of cooperation between disciplines, sectors, organizations and regions, in order to avoid a malicious arms race and to embody a philosophy of "optimising symbiosis." Beyond these high-level principles, how to interpret harmony, a term loaded with political and philosophical connotations, is contested. Some scholars have argued that because AI is not a "natural" development, it should be guided and sometimes suppressed, so as to respect the natural ways of life and to ensure harmony between humans and nature (Song 2020). Others emphasize the potential that AI has to become cognitive living systems (Zeng, Lu, and Huangfu 2018), meaning that it is important for humanity to be in harmony with AI itself, including a notion of empathy: "what human (sic) do not want AI to do to human, human should not do unto AI" (Zeng 2018). Finally, harmony could relate to a moral obligation of the individual to the whole (Jiang, Han, and Lan 2019).
Table 1. The EU HLEG's ethics principles compared with China's New Generation AI Governance Principles.

EU: Human agency and oversight. Including fundamental rights, individual agency, and human oversight.
China: Harmony and friendliness. The goal of AI development should be to promote the well-being of humankind as a collective.

EU: Privacy and data governance. Including respect for privacy, quality and integrity of data, and access to data.
China: Respect for privacy. AI development should respect and protect personal privacy and fully protect the individual's right to know and right to choose.

EU: Diversity, nondiscrimination, and fairness. Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation.
China: Fairness and justice. The development of AI should promote fairness and justice, protect the rights and interests of all stakeholders, and promote equal opportunities.

EU: Societal and environmental wellbeing. Including sustainability and environmental friendliness, social impact, society, and democracy.
China: Inclusion and sharing. AI should promote green development and push forward the transformation and upgrading of all walks of life.

EU: Accountability. Including auditability, minimization and reporting of negative impact, tradeoffs, and redress.
China: Shared responsibility. Establish an AI accountability mechanism to clarify the responsibilities of developers, users, beneficiaries, etc.

EU: Transparency. Including traceability, explainability, and communication.
China: Safety and controllability. AI systems should continuously improve transparency, explainability, reliability, and trustworthiness.

EU: Technical robustness and safety. Including resilience to attack and security, fallback plan and general safety, accuracy, reliability, and reproducibility.
China: Agile governance. Respect the natural laws of AI development; while promoting the innovative and orderly development of AI, search for and resolve risks that might arise.

China: Open collaboration. Encourage exchanges and cooperation across disciplines, domains, regions, and borders.

In contrast to a rich variety of academic perspectives, there is little clarity from government documentation as to what role harmony should play in governing AI. Whilst the notion of harmony present within the Beijing AI Principles is indicative of cooperation in governance, that of the New Generation AI Principles is vaguer. This ambiguity is significant, as
there has been a precedent of the Chinese government co-opting a notion of harmony to fill the perceived ideological and moral vacuum left by the Cultural Revolution, whilst at the same time using it to reassert state authority and control (Wang, Juffermans, and Du 2016). A politicized notion of harmony could be taken to promote and justify a collective good over the protection of individual rights (Thornhill 2019). This is not to condemn harmony as an ethical principle, or the promotion of some social interests over the individual, per se. The debate on individual rights within China is distinct from that present within the West, suggesting a potentially higher threshold for the infringement of certain individual rights (Angle 2002; Perry 2008). Moreover, for achieving socially desirable outcomes, some notion of harmony could be beneficial for the EU, particularly in regard to protecting the natural ecosystem and promoting sustainable development (Li et al. 2016). Instead, it is to point out that an idea of the harmonious whole is being prioritized over the individual in Chinese AI policy, in stark contrast to the EU's approach, and that there is both precedent and evidence to suggest that a politicized interpretation of the term could be co-opted for the CCP's benefit.

Privacy
Whilst the presence of the word "harmony" is a relatively easy difference to spot (the difficulty lies in its interpretation), other differences between the AI ethical principles espoused by the respective governments only become apparent at the point of translation from principles to practices (Whittlestone et al. 2019). Principles can inform policies, and policies can steer practices, but ultimately practices reveal the true interpretation of the principles. Thus, which principles and interpretations governments choose to prioritize reveals the underlying values they endorse and, consequently, whom they believe should benefit from AI. There are, for example, revealing differences in how China and the EU deal with tensions between privacy, fairness, and security.
Consider privacy. The EU's General Data Protection Regulation (GDPR) enacts some of the strictest privacy protections globally. It curtails the ability of companies to share personal information without a lawful basis, typically an individual providing valid consent. These regulatory protections focus on protecting the individual. However, there are important exclusions that can be used, such as for public health reasons. China's comparable legislation, the Personal Information Protection Law (PIPL), passed in August 2021, appears on the surface to provide individuals with many of the same stringent protections as the GDPR. It includes informed consent provisions and fines of up to 50 million RMB ($7.75 million) or five percent of annual turnover for those in violation of the law. As in the EU, exclusions apply for consent and notification measures, such as when these would "impede state organs' fulfilment of their statutory duties and responsibilities" (Article 35).
Despite these apparent similarities, the actual protections each regulation affords are notably different. Mass surveillance, which is at its most extreme in the province of Xinjiang, provides the clearest example of how infringements of citizen privacy are unlikely to be curtailed by PIPL. Since 2009, the minority Uyghur population has been subject to an array of invasive surveillance technologies, including widespread facial recognition technology and a compulsory phone application that automatically detects the viewing or authorship of "subversive" content. Although, in practice, coordination problems have limited the ability of the technology to live up to the government's aims (Leibold 2020), the introduction of these technologies contradicts a holistic right to privacy and displays a far greater willingness on the part of the CCP than of the EU to exploit AI technology in the name of national security.
An optimist might hold that the introduction of PIPL will lead to less invasive surveillance measures. Yet, like previous privacy initiatives within the country, PIPL provides strong consumer protections that appear unlikely to extend to comprehensive citizen protections. There is overlap in the topics covered by the PIPL, the Cybersecurity Law (2017), and the Data Security Law (2021), revealing tensions between the protection of individual rights and public security that allow for the utilization of exclusions within PIPL (Shi 2020). More importantly, despite a concerted effort to codify data protections, there is ultimately a lack of meaningful checks on the power of the CCP (Horsley 2019; Li 2012). This contrasts with the EU, where equivalent vague provisions within the GDPR, such as those related to public health, are clarified by judicial review. Thus, despite a commitment to protecting individual privacy in the AIDP, subsequent ethical principles, and other regulatory documents, the protection of citizens' privacy rights is valued less highly than in the EU's approach, with a greater focus on protecting consumer rights and society or, if viewed with skepticism, the power of the CCP itself.

Fairness
Fairness provides another example of divergence in terms of how principles are being discussed and implemented in practice. In the EU, policy discussions about AI fairness have been wide-ranging. In the HLEG's Ethical Guidelines, it is stressed that fairness has both substantive and procedural elements, with the former including a just distribution of benefits and costs, whilst the latter entails the ability to contest and seek redress against algorithmic decisions. To ensure fairness, it is emphasized that inclusion should be present throughout the whole AI lifecycle, that the means are proportionate to the ends when using AI, and that systems are accountable.
When it comes to applying this policy position in practice, the EU has taken active steps through the proposed AI Act, which bans AI systems that it perceives as posing the risk of discriminatory outcomes and the exclusion of minority groups, such as social scoring systems (Article 5). On top of this, the preamble explicitly states that high-risk AI systems should be accompanied by instructions on use, including concise information on the risk of discrimination, where appropriate. That being said, some have criticized the AI Act for not going far enough in mitigating potential harms from bias in high-risk systems, arguing that the text surrounding disparate impact assessments is vague and non-committal, with little in the way of formal requirements for checks on bias (MacCarthy and Propp 2021). Nonetheless, it is evident that a holistic understanding of AI fairness is emerging that is guided by equality amongst individuals, proportionate systems, and a right to redress, and that as further work is undertaken to flesh out these measures, interpretations will be bounded by these considerations.
Chinese AI ethics principles also emphasize the importance of fairness, with the New Generation AI Principles, for instance, outlining that AI should promote fairness and justice, protect the rights and interests of all stakeholders, and promote equal opportunities. Until recently, policy related to AI fairness, beyond these high-level principles, was comparatively underdeveloped (Guo and Hui 2020;Wu, Huang, and Gong 2020). In 2021, a flurry of policy measures that consider and seek to remedy bias have been released. The aforementioned PIPL (2021) explicitly states that fairness and justice must be maintained in automated decision-making. The Ethical Norms for the New Generation Artificial Intelligence emphasize that in the research and development stage of the AI lifecycle, a diversity of demands should be considered and ethics reviews strengthened when collecting data and developing algorithms. The Cyberspace Administration of China's regulation "Internet Information Service Algorithmic Recommendation Management Provisions" provides the most tangible effort to ensure fairness in consumer outcomes by prohibiting "unreasonable differentiation in terms of transaction prices or other transaction conditions…based on consumers preferences, transaction habits, or other traits" (Article 21).
Beyond the above provisions, there are few specific measures to mitigate harmful biases, yet there is a clear emphasis in these policy documents on empowering consumers, which could be seen to indirectly target these ends. For instance, PIPL guarantees that individuals have the choice not to be targeted based on their characteristics for "push delivery" decision-making, and that individuals have the right to an explanation and to refuse a decision based solely on automated decision-making when it has a major influence on the "rights and interests of an individual" (Article 24). Similarly, the draft regulation for recommender systems states that users should be provided with an option to turn off recommendations and "functions for selecting or deleting user labels used in algorithmic recommendation services that target their personal traits" (Article 17). 12 While these provisions have the potential to empower consumers to challenge unfair outcomes, depending on how they are implemented, they could in fact lead to more unequal outcomes, with only those who are technically literate able to utilize them (Tsamados et al. 2022).
While similarities can be drawn between the EU's and China's fledgling measures to address the issue of fairness, reflecting on the current application of algorithmic systems in certain areas makes it apparent that interpretations of fairness differ. Consider national security, where the Chinese government places a heavy emphasis on ensuring political stability and control (Feng 2013). This emphasis impacts the policies of local officials who are, as mentioned, the ones practically implementing AI systems throughout China. There is a strong incentive for local officials to avoid serious mistakes whilst also justifying their budgets for ensuring order (Pan 2020). This prioritization has encouraged the pervasive introduction of surveillance systems, including Uyghur-identifying facial recognition systems (IPVM Team 2021). These systems have reportedly been used in at least three provinces, with police departments in 16 provinces and regions requesting similar technology since 2018 (Mozur 2019; Van Noorden 2020).
The calibration of these systems is also revealing. Because of local officials' incentives to ensure order, recall (i.e., ensuring that all harmful actors are identified) is prioritized over precision (i.e., ensuring that a high proportion of those identified are harmful). As Pan (2020) argues, the underlying logic is that it does not matter if some people who are targeted pose no threat, so long as, on average, a larger proportion of potential threats is identified; by contrast, there are major costs to local officials for missing potentially threatening individuals. This is indicative of broader thinking about risk management within the country, with public order valued more highly than the rights of individuals and minority groups (Zeng 2020). Neither these discriminatory systems nor the way in which they are being calibrated would be considered "fair" under the EU's standards, on account of a lack of equality of outcomes and proportionality.
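The asymmetry described above can be made concrete with a small, purely illustrative calculation. The counts below are invented for the sake of the sketch and do not come from any real system; they simply show how a system calibrated for high recall can flag many people who pose no threat while missing few genuine ones.

```python
# Illustrative sketch of the recall/precision tradeoff; all counts are hypothetical.

def precision(tp: int, fp: int) -> float:
    # Of those flagged, the proportion who are genuine threats.
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Of genuine threats, the proportion who are flagged.
    return tp / (tp + fn)

# A system calibrated for recall: flag widely, tolerating false positives.
# Suppose 100 genuine threats exist among those screened.
tp, fp, fn = 90, 900, 10
print(f"recall    = {recall(tp, fn):.2f}")     # high: few threats missed
print(f"precision = {precision(tp, fp):.2f}")  # low: most flagged pose no threat
```

Under these invented counts, recall is 0.90 while precision is only around 0.09: nine in ten people flagged are harmless, which is exactly the tradeoff the incentive structure tolerates.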
Healthcare is another area where concerns over AI fairness may differ. The EU is relatively hesitant regarding the adoption of AI in healthcare. Although healthcare is promoted as one of the key "beneficent" uses of AI, it is also well recognised that AI poses a significant threat to patient safety, and so adoption is being carefully controlled through various regulations, including the "Medical Device Regulation," which came into force in May 2021.
In contrast, China is keen to promote the adoption of AI healthcare technologies to help compensate for its significant healthcare workforce shortage, which is particularly problematic in rural areas. Although in theory this rapid adoption could help reduce healthcare inequalities, the lack of "checks" in place means there is also a considerable risk of the use of AI widening inequalities (Guo and Hui 2020). For example, 21 Chinese hospitals began using IBM's "Watson for Oncology" in 2016. The system was trained on data from a single US hospital and then improperly localized to a Chinese population; it was therefore less accurate when used in the diagnosis and treatment of Chinese patients. Such modeling risks are compounded for less represented groups, for whom the availability and quality of medical data is variable or indeed poor. Data concerns alone could significantly widen inequalities from an accuracy perspective, before other "qualitative" inequalities, such as empathetic care, are even considered. However, the unique contextual challenges and tangible pressures of the Chinese healthcare system also present a unique opportunity, which may in the long term shape a stronger emphasis on substantively fair and equitable outcomes, including when compared to the EU.

Recommendations for improving ethical outcomes
The EU and China have begun to develop distinct visions for what a "Good AI Society" should look like. The EU's model is characterized foremost by its promotion of fundamental rights that guide and bound the types of innovation that are permissible. China's model has focused more heavily on promoting innovation for national and societal benefits, but red lines are increasingly being introduced in areas where consumer rights or CCP interest is perceived to be threatened. Both strategies are still evolving, providing considerable scope for best practices to be replicated and adapted across contexts. A sign of good governance is an ability to learn from other contexts and understand where gaps are present, and where the wrong tradeoffs have been made, leading to suboptimal outcomes.
Having outlined what the aims of each strategy are, the mechanisms introduced to meet these aims, and whom each strategy is seeking to benefit, we can now turn to consider which policies could be emulated and adopted by the EU and China to increase the likelihood of ethical outcomes. In line with our ethical pluralist framing, our recommendations provide guidance that we hope may help each government achieve its high-level aims and complement the measures already in place.

Recommendations to Chinese policymakers
Before substantial progress can be made in developing effective frameworks for the ethical governance of AI in China, it is necessary to delineate responsibility more clearly within government. Multiple bodies in China with overlapping responsibilities have indicated their interest in AI ethics by endorsing, or signaling an intention to endorse, different sets of ethical guidance, indicating that the current failure to ensure the ethical governance of AI may in part be a symptom of the "fragmented authoritarian" model. Whilst it is undoubtedly positive that several government bodies are taking an interest in AI ethics, uncertainty over who is authoritative in this space is problematic, as it provides little guidance as to which ethics frameworks should be followed or who should go further in providing guidance.
To ensure an effective governance ecosystem for algorithms, the Chinese government should go further in outlining an overarching vision for how it believes AI ethics should be developed and implemented within China, including greater clarity about who holds responsibility amongst different government organizations. While the joint statement from nine government agencies on algorithmic governance is a promising step in this direction, the document does little to apportion responsibility amongst agencies, nor does it provide a common foundation for guiding ethical governance measures. With this in mind, there are two broad options for achieving this recommendation in practice. On the one hand, the government could outline a comprehensive regulatory framework, similar to the EU AI Act, that governs the AI ecosystem cross-sectorally. If enacted properly, this approach would guide the AI ecosystem in providing holistic protections that would move China beyond its seemingly ad hoc approach. However, the prescriptive nature of this approach could also come with drawbacks, such as a lack of long-term agility when dealing with the changing nature of AI.
On the other hand, an alternative would be to allow government agencies to take the lead in governing AI, but to provide an overarching framework that guides their initiatives. This could be achieved through an authoritative and properly explicated set of principles, which could act as a common foundation for consistency between, and cooperation amongst, government agencies. This second approach would be more agile and would allow for greater respect of context-specific norms. It would also be more consistent with the agency-led approach that China has followed to date. However, for this sector-based approach to be successful, ensuring that regulators have the appropriate resourcing, expertise, and coordinative capabilities is key.
Regardless of which of these options is selected, lessons can be taken from the EU's experience of governing AI. The EU AI Act is the first attempt globally to introduce a comprehensive cross-sector regulatory measure for AI. A risk-based framework that distinguishes between high- and low-risk AI technologies and applications, as the EU AI Act does, could provide companies with the right regulatory environment for innovation by ensuring that high-risk AI technologies are regulated without also bundling in less risky use cases. This clarity would help improve existing initiatives that seek to govern recommender systems, with the current wording of the draft regulation leaving open the possibility that Douyin's (China's TikTok) algorithm could be banned because it promotes addiction to content (Toner, Triolo, and Creemers 2021). Focusing on developing a cross-sectoral regulatory framework, as is being done in the EU, would also allow Chinese governance initiatives to move beyond the current ad hoc unpredictability, which harms innovation. This is not to say that the EU has found the perfect formula; there is significant scope for China to improve upon the EU AI Act, whose provisions, such as the ban on "manipulative systems," are also vague in places (Roberts, Cowls, Hine et al. 2021).
Though the EU has not chosen to follow a sectoral approach to regulating AI, its experience with the HLEG could also act as a model for introducing baseline cross-sectoral guidance. This group, which draws on academic, private sector, and third sector expertise, worked to define authoritative principles and implementation guidance that apply across sectors. It is evident that China's New Generation AI Governance Committee is inspired by this model, yet its outputs currently fall short of the HLEG's. The principles released by the committee, as well as the subsequent ethical norms, are extremely high-level and vague. Indeed, these two documents combined are even shorter than the initial set of AI ethics principles released by the HLEG. More importantly, it is questionable whether these principles are authoritative across government agencies and the private sector. For this governance committee to effectively provide a foundation for a cross-sectoral approach, it will need the endorsement of the various government ministries 13 that have begun work on AI ethics. Accordingly, the HLEG can continue to act as a governance model that can be emulated in China; however, the equivalent Chinese initiatives need to be properly supported, and coalesced around, by government agencies, so as to ensure they provide a coherent basis for agency-led governance going forward. If successful, this could act as the basis for achieving a key aim outlined in the AIDP: to codify ethical protections by 2025.

Recommendations to EU policymakers
When considering what the EU can learn from China on the ethical dimension, the list is more limited, given that the latter has until recently focused mainly on promoting innovation. Nonetheless, there are a number of areas for consideration. The area where China has evidently been superior thus far is in promoting its own domestic technology companies and curtailing their power and influence where behavior is perceived to be unethical, amongst other reasons. There are multiple reasons why China is able to assert this control, many of which are out of reach or undesirable for the EU (e.g., influencing and utilizing public opinion), yet one strategy that could be adopted, and has been promoted by some commentators, is the development of AI national champions (Arcesati et al. 2018; Candelon et al. 2020; Elliot 2020). This model could in theory lead to the incubation of European companies that are more aligned with EU values, in turn lessening reliance on foreign companies. If successful, this could lead to companies operating in the market that are more willing to follow the EU's vision for a "Good AI Society." In practice, however, such a suggestion may be misguided. Chinese technology companies that have been given national champion status are already internationally successful; there are fewer such European companies (Roberts, Cowls, Casolari et al. 2021). Historic efforts by the EU to develop national champions in other sectors have ended in failure (Strange 1996). Equally pertinent, it is questionable whether the development of EU "Big Tech" would lead to the creation of companies that are more willing to follow regulations. Indeed, it seems likely that the array of tools the Chinese government has to control its technology companies is more influential in checking behavior than those companies being Chinese per se.
Whilst the national champion model appears to be a red herring, there are other, more specific policy areas where EU policymakers could learn from China. One notable example is the greater consideration given to systemic risk in Chinese policy documents. In the recent TC260 ethical guidelines outlining the security risks of AI, "out-of-control" AI that exceeds the intentions of its developers is listed first. This reflects wider narratives in the Chinese AI ecosystem that foreground the importance of control (Allen 2019). The EU's Digital Services Act (DSA) obliges very large online platforms to conduct assessments of systemic risks (Article 26) and to take steps to mitigate those risks (Article 27), yet the more recent EU AI Act gives notably little consideration to how AI could lead to wider systemic change, which may not be fully addressed within its current provisions (Whittlestone et al. 2021). In the forthcoming period of revisions to the DSA and the AI Act, it would be beneficial for EU policymakers to address this gap by bolstering the measures that target systemic risk, for instance by improving oversight mechanisms for the internal risk assessments proposed in the DSA. The promotion of regulatory markets for AI assurance is one way this could realistically be achieved, given regulator resourcing constraints (Clark and Hadfield 2019).
Another area where the EU could potentially emulate China is in some of its provisions regulating recommender systems. Article 29 of the DSA obliges very large online platforms to explain in their terms and conditions the main parameters of their recommender systems and to outline the options a user may have to alter those parameters or reject profiling, if a platform chooses to make such options available. This transparency is a step forward; however, burying a few lines within terms and conditions that users are often unwilling to read, while failing to oblige platforms to offer meaningful choices over recommender systems, is inadequate for achieving positive outcomes. China's regulation on recommender systems goes further in this regard, mandating that companies provide consumers with choice over these parameters and the option to switch off recommendations altogether. The EU stands to learn from this enhanced provision, yet its effectiveness in practice will depend on the design of recommender systems and of the interfaces through which users exercise their modification choices. The sheer number of parameters and the complexity of their weighting could overwhelm users or give them a false sense of transparency over how the system functions (Helberger et al. 2021). As enforcement of China's recommender system regulation matures, EU policymakers should watch closely to understand the measures taken.

Recommendations to both Chinese and EU policymakers
One policy area where both China and the EU could improve is in undertaking better public engagement when formulating policy. In the EU, there is a risk that the governance measures being introduced will be misaligned with citizens' interests. Large American technology companies have become the biggest spenders in lobbying Brussels policymakers (Kergueno et al. 2021), which could lead to a problematic watering down of provisions in the draft EU AI Act, as was previously seen with the AI White Paper. Even if this watering down is avoided, the proposed reliance on standards bodies to flesh out many value-laden provisions could still threaten the efficacy of the AI Act, given standards bodies' inexperience in the field of ethics and the high technical barriers to individual or interest group engagement (Veale and Borgesius 2021).
In China, there is also a risk that policy will be misaligned with citizens' interests. Public consultation takes place on most AI regulations, including the CAC's Regulation on Recommender Systems and TC260's Ethical Standards for AI. However, the consultation periods are short: the former lasted a month, and the latter only two weeks, without sparking significant societal debate (Xue and Jia 2021). Moreover, with a restricted interest group ecosystem, the ability of civil society to feed meaningfully into future AI policy documents is necessarily constrained.
To help overcome these issues, effective citizen engagement is needed. The technical nature of AI may make this difficult, meaning public polling and other such light-touch engagement may have limited worth. However, this is not a reason to avoid undertaking meaningful engagement to understand what citizens think. Likewise, engagement should not rely solely on interest groups, which are being significantly outspent and outcompeted by industry in the EU and are hamstrung by China's political system, limiting their ability to represent plural interests effectively. One promising option is to develop citizens' assemblies on AI that represent the full diversity of China and the EU, respectively. These assemblies could be trained by a diversity of experts to understand key policy issues related to AI over a number of months, with participants then voicing their opinions to policymakers. The Ada Lovelace Institute's Citizens' Biometrics Council (in the UK) provides a good example of this type of engagement and offers a blueprint for how Chinese and EU policymakers could more effectively solicit public opinion on AI governance. Through improving participation in this manner, and by adopting the other recommendations laid out in this section, China and the EU can make significant progress in achieving their visions of a "Good AI Society."

Notes
1. https://www.oecd.ai/dashboards
2. In the rest of this article, we use "China" or "Chinese" to refer to the political, regulatory, and governance approach decided by the Chinese national government concerning the development and use of AI capabilities. The same holds true for the "EU."
3. Most of the section "Whom AI is meant to benefit" of this article is dedicated to discussing how, in practice, interpretations of such values have differed in the EU and China.
4. A purely normative recommendation would include absolutist suggestions, such as representative democracy being a necessary condition for ethical AI. This is not to imply that process-based recommendations are value-neutral; by their very nature, they too are embedded within our own particular worldviews.
5. Four more countries have since joined this initiative.
6. Disclosure: Luciano Floridi was a member.
7. This is not to say that discourses have focused solely on innovation; some national newspapers have published on AI and the rule of law (see Xinhua 2018).
8. Common prosperity is a term that dates back to the Maoist period but has recently been adopted by Xi Jinping in reference to curbing the inequalities that grew through China's rapid development (Yao 2021).
9. Compare this to the EU, where standards are developed through industry dialogue, meaning sectoral dominance by a single company is more difficult.
10. One of the few exceptions regards privacy, a point we develop further in the next section.
11. Despite intentions to have a national-level social credit system by 2020, its implementation is currently fractured, comprising a national blacklist, local trials, and a commercial financial system (Liu 2019).
12. As discussed in the conclusion of this article, the provisions here resemble a strengthening of the Digital Services Act (DSA), which states that very large platforms using recommender systems "shall set out in their terms and conditions, in a clear, accessible and easily comprehensible manner, the main parameters used in their recommender systems, as well as any options for the recipients of the service to modify or influence those main parameters that they may have made available" (Article 29).
13. Failing this, the issue would need to be tabled at the State Council. For a full discussion, see Shirk's (1993) analysis of the "delegation by consensus" model.
This is not to imply that process-based recommendations are value-neutral. By their very nature, they are also embedded within our own particular worldviews. 5. Four more countries have since joined this initiative. 6. Disclosure: Luciano Floridi was a member. 7. This is not to say that discourses have solely been focused on innovation. Some national newspapers have published on AI and the rule of law, see (Xinhua 2018) 8. Common prosperity is a term that dates back to the Maoist period but has recently been adopted by Xi Jinping in reference to curbing the inequalities that were allowed to grow through China's rapid development (Yao 2021). 9. Compare this to the EU, where standards are developed through industry dialogue, meaning sectoral dominance by a single company is more difficult. 10. One of the few exceptions to this regards privacy, a point that we will develop further in the next section. 11. Despite intentions to have a national-level social credit system by 2020, its implementation is currently fractured and includes a national blacklist, local trials, and a commercial financial system (Liu 2019). 12. As will be discussed in the conclusion of this article, the provisions here resembled a strengthening of the Digital Services Act (DSA), which states that very large platforms that use recommender systems "shall set out in their terms and conditions, in a clear, accessible and easily comprehensible manner, the main parameters used in their recommender systems, as well as any options for the recipients of the service to modify or influence those main parameters that they may have made available" (Article 29). 13. Failing this, the issue would need to be tabled to the State Council. For a full discussion, see the Shirk (1993) analysis of the "delegation by consensus" model.