Dreams Lab: assembling knowledge security in Sino-Dutch research collaborations

ABSTRACT Amid concerns over the rivalry between Washington and Beijing, the discourse and practice of knowledge security have become prevalent in Europe. This is especially true with regard to Sino-Western research collaborations on emerging technologies. Despite the scientific and economic benefits, these collaborations are increasingly perceived as a potential threat in the context of broader concerns with so-called hybrid threats. Knowledge security has emerged as a key term to identify and mitigate the risk of espionage, unwanted knowledge transfers, censorship, and the misuse of dual-use technology. To understand knowledge security and its implications, the article offers a qualitative, in-depth case study of DREAMS Lab in the Netherlands: an AI research project run by the University of Amsterdam and the Free University of Amsterdam and funded by the Chinese company Huawei. Li’s practices of assemblage are used as an analytical framework to answer the question of how and why a diverse group of actors was brought together to respond to DREAMS Lab and to govern scientific knowledge on emerging technologies. By analysing the discourse and practice of knowledge security, the article offers crucial insights into how great power rivalry is shaping scientific research and the international exchange of knowledge and technology.


Introduction
Amid concerns over great power rivalry, the discourse and practice of knowledge security have gained prevalence in the Netherlands and Europe more broadly. Over the past five years, the term has been increasingly used by governments, think tanks, universities, and journalists to refer to the security and ethical challenges associated with international scientific collaborations with non-like-minded countries. Knowledge security has become especially relevant in light of the significant growth in Sino-European collaborations. To position itself as a leading scientific nation, China has implemented an active policy to transform its higher education and scientific research since the 1990s (van der Wende et al. 2020, pp. 3-8). This has led to a sharp increase in the quantity and quality of Chinese academic publications, making Chinese universities attractive research partners for Western counterparts (Vennekens and Demirel 2021). Working with Chinese research partners, for example, provides access to means,1 talent, and expertise. In the case of the Netherlands, most collaborations entail bottom-up initiatives by individual researchers that lead to co-published articles, the exchange of PhD students, or more structural research projects (The Netherlands Enterprise Agency 2020).
Despite some benefits, working with Chinese research partners is increasingly perceived in Europe as a potential threat or as unethical. This is especially the case for collaborations on advanced technologies that have become the subject of great power competition. Exemplary is the debate in Europe and the United States (US) about China's ambition to become a world leader in the field of Artificial Intelligence (AI) (Overly and Heikkilä 2021). Perceived threats when working with Chinese universities include undesired knowledge transfers, intellectual property theft, the covert influencing of individuals, compromising ethical standards or principles of scientific integrity, and the misuse of dual-use technology.2 Although a complete ban is considered unrealistic and undesirable, subject experts call for more awareness at an individual, institutional and national level and make recommendations to prevent breaches in knowledge security.3 The growing prevalence of knowledge security reflects a broader shift in US and European security perceptions. Over the past decade, countering so-called hybrid threats has become a top priority in international security policies and practices (Nilsson et al. 2021, p. 1). Definitions abound, but policy makers often use the term hybrid threats to describe the nonviolent grand strategies of so-called revisionist states, notably Russia and China (Caliskan 2021, p. 44). Such strategies may include disinformation campaigns, cyber-attacks, election meddling, and the influencing of diasporas, but also intellectual property theft or the influencing of individual scientists. Restricting knowledge-sharing practices through securitisation is seen as a way to mitigate these hybrid threats.
Though the problem of knowledge security is not unique to the Netherlands, the country presents an interesting case study to better understand the discourse and practice of knowledge security in Europe. The Netherlands has actively pursued a policy of internationalising its scientific research and higher education, and China represents one of its key international partners. As a result, Sino-Dutch collaborations have increased significantly during the past decade (Vennekens and Demirel 2021). At the same time, the country is confronted with the current geopolitical rivalry and is pressured by the US to take a more assertive stance toward China (van Wijnen 2020a). This has made knowledge security of particular relevance to the Netherlands, generating considerable public attention.
Popular attention for knowledge security has been fed by a number of Sino-Dutch collaborations that have caught the public eye. In 2019, for example, the public news programme Nieuwsuur revealed that PhD candidates from the National University of Defense Technology in China were conducting research on potentially dual-use technology at Delft University of Technology (Haan and de Kruif 2019). The institutional affiliation of the PhD candidates and the nature of their research raised concerns over the transfer of advanced technological knowledge to the Chinese army. A year later, an AI research project run by the University of Amsterdam (UvA) and the Free University of Amsterdam (VU) and funded by the Chinese telecommunications company Huawei4 triggered a heated debate. Besides concerns over the transfer of advanced technological knowledge on AI to China, the collaboration was also considered problematic due to Huawei's alleged complicity in the oppression of the Uyghurs, a religious and ethnic minority living in the Chinese province of Xinjiang, by developing surveillance technology for the state. DREAMS Lab, as the research project is called, not only triggered a debate in politics, the media, and academia, but also functioned as a catalyser in the formulation and implementation of knowledge security policies. Because of this, the article analyses the DREAMS Lab project as an in-depth case study of knowledge security.
With its analysis of DREAMS Lab, the article makes three contributions. To begin with, it contributes empirically by giving insight into how knowledge and technology are increasingly securitised amid the current geopolitical rivalry. Specifically, it traces how knowledge security was formulated and implemented in the Netherlands by an assemblage of actors. At a more conceptual level, the article demonstrates that knowledge security entails more than a securitising discourse with a policy outcome. Knowledge security also represents a set of practices that constitute a governing of the international exchange of knowledge and technology. Theoretically, the article uses the concept of assemblage in order to explain how knowledge security emerged. Assemblage is understood here as a socio-material formation that consists of a multiplicity of actors that are brought and held together by a shared threat perception to have a governing effect (Demmers and Gould 2018, p. 367). This approach reveals how knowledge security was assembled by government ministries, universities, think tanks, politicians, and journalists who, in the process, had to find common interests and solve difficult dilemmas despite their diverging perspectives. Taken together, the article contributes to a growing body of International Relations (IR) literature (e.g. Acuto and Curtis 2014, Leese and Hoijtink 2019, Bellanova et al. 2020) that draws inspiration from Science and Technology Studies to better understand the impact of emerging technology in international politics. Specifically, the assemblage-inspired analysis of DREAMS Lab answers the call of scholars like Calcara et al. (2020) for a new research agenda to better understand the international, political, economic, security and normative aspects of dealing with emerging technologies.

Knowledge security
Despite the contemporary focus of the article, it is useful to begin by pointing out that science, technology and security are of longstanding interest to academic research and state policy. This is aptly illustrated by dual-use, a term that is used to refer to technology that can serve both civilian and military purposes. Dual-use came into usage in the context of the proliferation of nuclear technology following the Second World War (Martins and Ahmed 2020, pp. 59-60). Since then, the scope of dual-use has expanded, with emerging technologies such as AI posing the latest challenges to existing dual-use mechanisms. Securing technologies like AI has become of particular interest to international actors given the "leadership potential provided by revolutionary tech-driven innovation and its conversion into economic and military power" (Calcara et al. 2020, p. 1).
Notwithstanding, the role and impact of emerging technology remain insufficiently conceptualised in much of the IR literature, which has tended toward a deterministic understanding. Bellanova et al. (2020, p. 89) argue that technology has often been understood as an instrument and that science and politics were long considered separate. Leese and Hoijtink (2019, p. 8, emphasis in original) explain that IR scholars have primarily been interested in "technology as a tool that has the capacities to amplify power, foster processes of globalization, or play a role in the emergence of norms and identities" and have shown "surprisingly little interest in unpacking technology". With unpacking, Leese and Hoijtink (2019, p. 3) mean studying how technologies are constructed and implemented and taking into account the "politics that go into technology, as well as the politics that emanate from technology".
An exception is the work of Rychnovská (2016, 2020) on the securitisation of scientific research on pathogenic viruses as dual-use knowledge. She explains that after the terrorist attacks on the US in 2001, the risk of terrorists using publicly available scientific research to create biological weapons became an important reason for closer governance of the life sciences. Using critical theory, Rychnovská seeks to "unpack the security logic behind the governance of life sciences" (2016, p. 311, emphasis added).
Dual-use is an important aspect of knowledge security and there are a number of insightful parallels between Rychnovská's research on dual-use knowledge and the case study on DREAMS Lab. For example, she identifies converging and conflicting interests at stake in the governing of dual-use knowledge. While security and ethical concerns converge to justify stricter regulations, they conflict with the belief in the free exchange of scientific knowledge as a driver for social and economic progress. Rychnovská also observes a shift from governing the production of scientific knowledge in a military context, toward an effort to govern the circulation and consumption of scientific knowledge. In other words, the emphasis is placed less on regulating the production of technology in a military context and more on controlling access to the flow of scientific knowledge in general. As a result, Rychnovská argues that the governance of dual-use knowledge increasingly entails practices of risk management and self-governing by scientific researchers and has led to an expansion of what falls under the scope of security oversight.
In her above-mentioned work, Rychnovská takes a practice-oriented approach. Though she draws inspiration from securitisation theory, Rychnovská (2016, p. 318) argues that a practice-oriented approach enables her to study the "security driven changes in the governance of science from a more complex perspective, involving multiple actors, routes and sites of securitization, and thus the broader implication of this process". This article takes a similar approach to understand the emergence of knowledge security in the Netherlands. However, this article differs in two important ways from Rychnovská and other scholars like Calcara et al. (2020) working on the governing of emerging technologies in Europe. First, the article does not focus on a particular type of dual-use technology or knowledge. Rather, it focuses on international collaborations in higher education and scientific research, arguably a crucial aspect of the international circulation of scientific knowledge. Second, the analysis does not focus on practices of knowledge production or circulation as such. Instead, the article uses Li's practices of assemblage to analyse the different practices that make the governing of international scientific collaboration by a diverse set of actors possible. As such, the article foregrounds the "practices that are often implicit in studies of government but seldom examined in a focused manner" (Li 2007, p. 264). Taken together, the article unpacks the discourse and practice of knowledge security to provide insights into how scientific knowledge is governed and intervened upon. These insights are crucial to understand how shifting security perceptions and the Washington-Beijing rivalry are shaping international scientific research.

Assemblage
As explained above, this article analyses the emergence of knowledge security and the reaction to DREAMS Lab by using the concept of assemblage, which originates from the philosophical work of Gilles Deleuze and Felix Guattari (1987). Assemblage has been adopted in a wide range of disciplines and has gained popularity in international social theory during the past two decades. In their volume, Reassembling International Theory, Acuto and Curtis (2014, p. 2) argue that assemblage offers a break with existing international theories that rely on categories like the state, city or society to study social reality. In the study of security, many scholars use assemblage as a structural metaphor to describe social-material formations that have emerged in response to contemporary security challenges. The best-known example is Abrahamsen and Williams' (2009) study of how private security contractors operate as part of a global security assemblage.
As opposed to using assemblage as a structural metaphor, this article draws on the work of De Goede and Simon (2013) and Demmers and Gould (2018) to analyse how and why assemblages emerge around shared threat representations. They define assemblage as "social and material formations" (De Goede and Simon 2013, p. 317) that consist of a "multiplicity of actors" that are brought and held together "under the cloak of a particular 'threat presentation' to achieve their objectives and have a governing effect" (Demmers and Gould 2018, p. 367). Crucially, assemblages are understood as emergent. That is to say, they are the result of "interactions of the often conflicting elements and external connections that constitute them", meaning that assemblages are inherently unstable and contingent (2018, p. 367). In addition, Demmers and Gould (2018, p. 368) argue that assemblages emerge driven by forces of privatisation and globalisation and are, therefore, "neither purely global/local nor purely public/private". Finally, De Goede and Simon (2013, p. 317) observe how the shared threat representation can project "potential threatening futures in order to enable security action in the present" giving the assemblage a pre-emptive logic. In sum, assemblage is an innovative concept to navigate our rapidly changing social reality and to study how and why perceived security threats (present and future) are responded to by groups of actors that blur existing local/global and private/public distinctions.
Like De Goede and Simon and Demmers and Gould, this paper takes a practice-oriented approach to assemblage and draws on the work of Tania Murray Li. To understand why a particular governing formation is formed and how it works, Li identifies six practices of assemblage that together form an analytical framework. In this way, Li (2007, p. 265) attempts to make explicit governing practices that go beyond the mere problematisation of a particular issue. These practices include (1) the forging of alignments through a shared threat representation, (2) the authorising of knowledge by identifying a body of requisite, expert knowledge, (3) the rendering technical of particular issues by problematising them and suggesting needed interventions, (4) managing tensions and contradictions in the assemblage by reframing them as superficial, (5) the containing and steering of political debates and finally, (6) re-assembling, or the adding, subtracting and rearranging of the components of the assemblage. Applying Li's framework to a different research object (Li used it to study community forest management in Indonesia) will help further develop practice-oriented approaches to assemblage in the field of IR.
Li's six practices offer a clear framework to analyse the response to DREAMS Lab and the emerging discourse and practice of knowledge security. Analytically, attention is drawn to how assemblages emerge, function and have a governing effect. In other words, the processes of assembling and their effects are foregrounded over the resulting assemblage as such. Li's practices not only help trace who is in- or excluded from the assemblage but also how the parties to the assemblage work together to "produce desired outcomes and avert undesired outcomes" (Li 2007, p. 264). Concretely, applying Li's framework to knowledge security will entail studying how actors engage in the practices described above in order to assemble the actors, discourses, practices and technologies needed to make Sino-Dutch research collaborations safe and beneficial. As such, the emphasis is placed on the assembling of knowledge security, as opposed to a defined knowledge security assemblage.

Practice-oriented research method
By foregrounding the practices of assemblage, this article builds on the so-called turn to practice in International Relations. Conducting practice-oriented research prescribes an ethnographic-like research method, referred to by some as praxiography (Bueger 2014). Practices are defined in this context as "socially meaningful patterns of action, which, in being performed more or less competently, simultaneously embody, act out, and possibly reify background knowledge and discourse in and on the material world" (Adler and Pouliot 2011, p. 4). This definition of practice entails a confluence of the discursive and material world (Graeger 2016, pp. 481-482). Practice not only reifies and reproduces discourse; discourses are also changed and transformed through practice. The aim of practice-oriented research, therefore, is to study the discursive and material elements of a particular practice by analysing "speech, actions, and the usage of objects" in order to reconstruct its social meaning (Bueger 2014, p. 388). The three most common strategies for data generation include participant observation, document analysis and interviews. However, as was the case in this research, restricted access to physically study security practices often necessitates researchers to rely primarily on interviews and document analysis (Bueger 2014, p. 399).
Based on the practice-oriented research method described above, a case study research design was used to sample and collect the data. The article uses Lund's (2014, p. 224) definition of a "case" as an "edited chunk of empirical reality where certain features are marked out, emphasized, and privileged while others recede into the background". This means that the case study entailed a purposeful selection of data sources informed by the assemblage analytic described above. In other words, written documents and research participants were "sampled strategically" based on their relevance to the DREAMS Lab project to give a qualitative representation of the different actors engaged in the practices of assemblage (Mason 2002, pp. 123-124). This process of sampling continued until a point of data saturation was reached.
In total, 88 documents were collected and 22 interviews conducted (with 25 participants), which were analysed in QSR NVivo. The documents were collected in two samples (see Appendix 1). Sample 1 included government documents, news articles from seven national newspapers and broadcasters, and relevant web content from the UvA, VU and Huawei, including articles from three university-affiliated media platforms that addressed the DREAMS Lab project. Sample 2 was collected through reference tracing based on Sample 1 and the interviews. This resulted in a selection of government documents and think tank reports that were either frequently cited in Sample 1 or that were explicitly mentioned by the interviewees. An initial selection of interviewees was made that represented the main actors involved in the DREAMS Lab project, including the UvA, the VU, the Ministry of Foreign Affairs (MoFA), the Ministry of Economic Affairs (MoEA), the Ministry of Education, Culture and Science (MoECS) and the Security Services. Additional participants were approached through snowballing based on their thematic expertise or their indirect involvement in the DREAMS Lab project (see Appendix 2). Finally, the documents and the transcribed interviews were analysed in a semi-inductive manner using codes that were developed based on Li's (2007) practices of assemblage. The code design also drew on the work of Sweijs et al. (2015) to operationalise concepts of security and on the work of Bowen (2009) on integrating interviews and documents in qualitative research. Integrating these two types of data contributed to the internal validity of the research (Hsieh and Shannon 2005, pp. 1280-1281).

DREAMS Lab
The UvA and the VU officially signed their agreement in May 2020, with Huawei committing a total of 3.5 million euros over a period of four years. The aim of the project is to research the use of AI to make search engine technology smarter, operational across different languages, and more conversational. Because Huawei is no longer allowed to use Google's software for its consumer products (including Google Search), developing new search engine technology is of particular interest to the company. To do so, the UvA and the VU are interesting research partners because of their expertise in combining data- and knowledge-driven AI. In turn, by collaborating with Huawei, the universities were able to secure research funding and access the large amounts of real-life data needed to work on conceptual problems in the field of AI that also have a practical application.
When the UvA and the VU were approached by Huawei, the discussion over the company's role as 5G provider was in full swing in the Netherlands. Though DREAMS Lab is a collaboration with Huawei's consumer branch and has nothing to do with 5G technology, the professors leading the project decided to take a number of precautions. They consulted experts and fellow AI researchers and they met with representatives of the MoECS and MoEA to discuss their plans. The professors were also briefed by the National Coordinator for Counterterrorism and Security and the General Intelligence and Security Services about the potential risks of the project (van Wijnen 2020a). In addition, the UvA and the VU made sure to contractually arrange financial responsibility, freedom of publication, intellectual property, the recruitment of staff, and the safe storage of the research data.
DREAMS Lab only became controversial when the national newspaper The Financial Daily (van Wijnen 2020a) began raising questions over the ethical desirability of the project and its possible implications for national security. The consternation did not cause the professors to abandon their project, but the DREAMS Lab case nonetheless catalysed public debate and the development of policy to make international research collaborations safe. DREAMS Lab is therefore used as the starting point to map an assemblage of actors (including universities, government ministries, technology companies, journalists, think tanks, and national research organisations) that, despite their differences, have come to share an interest in making Sino-Dutch collaborations safe, ethical, and beneficial for the Netherlands in the context of shifting Western threat perceptions (see Appendix 3). To understand the process of assemblage, the following five sub-sections offer an in-depth analysis of the DREAMS Lab case and the response it triggered using Li's practices of assemblage. The fourth sub-section combines two practices due to a significant overlap.

Forging alignments: the China threat
The first practice of assemblage identified by Li (2007, p. 265) is forging alignments, or the act of "linking together the objectives of the various parties to an assemblage". Crucial to this practice is the construction of a shared threat representation. As mentioned above, this process began with an article published in The Financial Daily (van Wijnen 2020a). The article contrasts the UvA and VU's decision to work with Huawei against the concerns of the US and a growing number of European countries about the risk of espionage and data leakage. While, according to the article, neither the above-mentioned ministries nor the Security Services raised hard objections, the project could have implications for the national security and economic interests of the Netherlands. The article reports that the government had previously warned Dutch universities that China "actively hunts" for the knowledge and technology it needs to pursue its geopolitical ambitions and that collaborating with Chinese partners can lead to so-called "undesired knowledge transfers" (van Wijnen 2020a).
The Financial Daily article was quickly picked up and triggered a debate in politics and in academia about the desirability of the project. Members of Parliament (MPs), for example, demanded clarification from the government as the company was banned as a 5G supplier due to security concerns. MPs from the ruling People's Party for Freedom and Democracy demanded that the government end the project immediately ( van Wijnen 2020b). At the UvA and the VU a debate broke out over the ethical desirability of collaborating with Huawei, due to the company's alleged complicity in the state oppression of the Uyghurs in the Chinese province of Xinjiang. The ethical concerns were twofold. On the one hand, there was a concern that the AI technology developed by the DREAMS Lab project could be repurposed for surveillance as Huawei had previously tested and developed similar technology for local authorities in Xinjiang. On the other hand, collaborating with Huawei would symbolically gloss over its complicity in human rights violations and thereby undermine the principles the UvA and VU stand for. These concerns resonated widely in academia and by October, a collective of thirty Dutch scientists from different universities wrote a public letter, calling on the UvA and the VU to review the project on ethical grounds in the light of allegations against Huawei regarding its complicity in genocide (Hijink 2020).
The threat representation created in the public debate on DREAMS Lab is not uncontested. Policymakers, think tank researchers and even journalists questioned and nuanced the representation of the research project. An interviewee from the MoEA, for example, explained that partnering with Huawei in an AI research project poses a very different security risk than allowing the company to supply the national 5G network (Interview 3). In addition, two interviewed policymakers and one journalist pointed out that no hard evidence has been found or made public that the company actually engages in espionage on behalf of the Chinese government (Interviews 1, 3 and 8). Given the advanced research already being done in China, a researcher at the Leiden Asia Center (LAC) also nuanced the risk of the undesired transfer of knowledge on AI (Interview 7). In their opinion, the potential misuse of the research for purposes that could violate human rights was a greater concern. Their concern was shared by an interviewed journalist working for The Financial Daily, who pointed out that the evidence for the human rights violation is much more robust (Interview 8). A number of interviewees also expressed their concerns over so-called "China-bashing" and the media's excessive attention to all things related to China. "There are lots of reasons to take a good look at our relationship with China in many different respects", a researcher working for the Hague Center for Strategic Studies (HCSS) explained, reflecting on the public debate, "but precisely this case (DREAMS Lab) is a good example of how you can make a careful consideration [of that relationship]" (Interview 9).
Despite the above-mentioned nuances and critiques, there is a shared understanding in the assemblage that the rise of China poses both a strategic and ideological challenge. The threat perception of China, especially as a technological competitor, has been strongly propagated by the US. Under the Trump administration, the US began pressuring its high-tech allies (including the Netherlands) to take a clear position on Chinese technology companies like Huawei (Interview 6). Crucial to this threat perception is China's ambition to become a world leader in technology and position itself as a global power by 2049. These ambitions are reflected in China's national development programmes such as Made in China 2025, China Standards 2035, the Military-Civil Fusion strategy, and the Belt and Road Initiative. The strategic challenge China poses also contains a strong ideological component. The Chinese state and Chinese Communist Party (CCP) are portrayed by parties to the assemblage as an authoritarian regime that does not respect the ideals of Western liberal democracy and violates the civil and human rights of its own population, the Uyghurs being a case in point. Without a clear distinction between public and private in Chinese society, interviewees warned that working with Chinese research partners can compromise scientific integrity. There is no real independent science in China, a researcher working for the LAC explained, so the Chinese state can easily appropriate and repurpose research outcomes for surveillance or military ends. This understanding of the "Chinese system", the interviewee continued, is difficult to grasp for Dutch scientists who assume that their Chinese colleagues are as free and independent to conduct research as they are (Interview 7).
The interviewed HCSS expert summarised the perceived challenge China poses to the West as fundamentally a clash of interests, adding that:

[o]n top of that there's a clash of ideologies, which is often added to give colour to the clash of interests and, well … to show that there's a good side and a bad side, and that we are the good side. (Interview 9)

Building on the shared understanding of China as both a strategic and ideological challenge, DREAMS Lab provided an effective opportunity to forge alignments. Framing Sino-Dutch research collaborations as a potential risk to Dutch national security, economic interests and academic integrity, for example, helped align the MoFA, the MoEA, and the MoECS in their response to the DREAMS Lab project. An interviewee at the MoEA explained that one of the key "takeaways" from the DREAMS Lab case was the close inter-departmental collaboration. This was reiterated by an interviewee at the MoECS, who explained that since the DREAMS Lab case, collaboration between the Ministries in addressing the challenges of Sino-Dutch research collaborations has improved. The ideological challenge China poses and the concerns over human rights violations have also helped align the academic community. The interviewee at the MoECS observed that it "strikes a chord with researchers" and that "one of the critical questions that a [university] can ask itself before they enter into a partnership, [is] whether it fits their moral values".

Authorising knowledge: state actors and hybrid threats
In order to make sense of and further justify the threat representation of DREAMS Lab and the notion of knowledge security, the actors in the assemblage drew on a body of expert knowledge. Determining what counts as expert knowledge is referred to by Li (2007, p. 265) as authorising knowledge: the act of "specifying the requisite body of knowledge; confirming enabling assumptions; [and] containing critiques".
In the case of Sino-Dutch research collaborations, reports published by think tanks played an important role. One interviewed journalist explained that the "perception of China has changed strongly over the past two years and I think that think tanks, at least on the subject of scientific collaboration, have played a very big role in that" (Interview 22). It was the work of think tanks like the ASPI (Australian Strategic Policy Institute), the LAC, the HCSS and the Clingendael Institute that made the journalist write about Sino-Dutch research collaborations in the first place. One of the authors of the LAC reports observed that their initial work in 2018 filled an important knowledge gap and that there remains a strong demand for more research (Interview 7). That same year, the LAC and the HCSS conducted a study funded by the MoFA. To improve the awareness of scientists, the authors devised a checklist of ten questions, which was later used by the UvA and VU to assess their collaboration with Huawei (van Wijnen 2020c). The above-mentioned think tanks continue to inform the debate and policy on Sino-Dutch collaborations. When the Permanent Parliamentary Commission on Education and Science met in October 2021 to discuss the risks of Sino-Dutch collaborations, it was briefed by experts from the HCSS, the LAC, and the Rathenau Institute.
The concerns raised by the think tank reports are placed in a wider discourse on the changing geopolitical context and the growing assertiveness of countries like China, Russia and Iran. The report published by the LAC in 2018, for example, observes that:

in the US, Australia, and New Zealand, but also increasingly in Europe, concerns about Chinese strategizing in areas like research and education are tied to deeper suspicions about China's rise and the country's "political influencing" or, as some even argue, "hybrid warfare" efforts. (D'Hooghe et al. 2018, p. 2)

The way the Ministries make sense of DREAMS Lab follows a similar logic. In a letter sent to the Parliament, the Minister of Justice and Security (2019) explains that the risks associated with Sino-Dutch research collaborations are seen in the light of globalisation, far-reaching digitalisation and shifting global power relations. The Netherlands is characterised as an open society founded on principles of freedom, democracy, the rule of law and international cooperation. These founding principles have allowed the Netherlands to benefit from globalisation and digitalisation, but they also make the country vulnerable to state actors that exploit the open character of Western democracies to pursue their strategic interests. The growing assertiveness of state actors like China, Russia, and Iran poses security challenges, including espionage, sabotage, foreign interference and disruption, cyber-attacks, and the undermining of social and democratic processes. These activities are not only directed at vital infrastructures, but increasingly also at so-called non-traditional targets like top-sector companies and research institutions.
The government's framing of the risks associated with Sino-Dutch collaborations in this way has two important implications. First, responding to DREAMS Lab becomes a matter of national security and constitutes part of the government's whole-of-society approach to countering hybrid threats. Second, the framing shapes how the threat is understood and responded to. As a hybrid threat, for example, DREAMS Lab should not be seen as an isolated event but as part of a coordinated effort by the Chinese state to harm the interests of the Netherlands in pursuit of its own. In addition, the emphasis on the potential materialisation of hybrid threats in an as yet unknown future gives countermeasures a preemptive logic. Scientific research on high-end technology needs to be made resilient against attempts to breach knowledge security that may result in a future loss of technological advantage or in a technological dependency. Expert knowledge on hybrid threats, therefore, not only informs and justifies an intervention in scientific research but also renders undesired future outcomes actionable in the present.

Rendering technical: knowledge security
While the previous two practices focussed on creating a threat representation and essentially politicising DREAMS Lab, the threat was rendered technical as a knowledge security problem. Li (2007, p. 265) defines the practice of rendering technical as the act of "extracting from the messiness of the social world" a common problem definition that identifies a problem, solution, and a beneficial outcome. In other words, rendering technical entails the work of making a threat actionable by bringing together the perceived risks, identifying the causes of insecurity, finding a solution, and determining who is responsible. Based on the sampled documents, DREAMS Lab presented a variety of problems, including a lack of individual and institutional awareness, insufficient compliance with existing regulations, inadequate institutional oversight, weak cyber and data security, unclear government policies, a lack of operational guidelines, and a lack of general knowledge and expertise on China. These problems are brought together in the common problem definition of knowledge security.
As explained, knowledge security is a term that came into use to describe the risks of collaborating with research partners from non-like-minded countries. Interviewees in and outside government found it difficult to define the term and often referred to the letter sent to the Parliament by the Minister of Education, the Secretary of State on Economic Affairs, and the Minister of Justice and Security (2020). The letter was published shortly after the controversy over DREAMS Lab and warns of state actors that actively seek to acquire high-value knowledge and thereby potentially threaten Dutch national interests. In this context, knowledge security entails preventing unwanted knowledge transfers, countering foreign influence in higher education and research, and weighing ethical implications. To achieve knowledge security, measures are needed to improve institutional awareness and effective self-regulation by universities on the one hand, and to establish an effective and binding assessment framework by the government on the other, one that respects the core academic values: "academic freedom, scientific integrity, openness, reciprocity, accessibility and institutional autonomy" (2020, p. 3).
Knowledge security is not just a security concept but increasingly also represents a set of practices. The MoECS, for example, is developing operational guidelines in collaboration with national research organisations that will allow individual researchers and universities to effectively assess potential partnerships. Parallel to these guidelines, the Association of Universities in the Netherlands (VSNU) has drawn up a framework that specifically spells out what knowledge security entails for universities. The VSNU is an important party to the assemblage in that it represents the interests of Dutch universities at a national and international level. Amongst others, the framework identifies concrete measures and practices that universities can implement to guarantee knowledge security, such as restricting access to spaces (such as laboratories) with sensitive technology, conducting pre-employment screening of staff who come from high-risk countries, and addressing insider threat risks (2021, p. 31). Besides taking part in sector-wide initiatives, universities like the aforementioned Technical University in Delft are also taking their own measures, including the appointment of knowledge security advisors and conducting detailed assessments of ongoing or potential research partnerships that address topics like agenda setting, cross-cultural communication, academic integrity, due diligence, legal contracts, and intellectual property.
In the assemblage, there is agreement on the need to create clear policy frameworks and operational guidelines, like the ones discussed above. However, there is always an inherent tension when developing policies in this area, as explained by a participant working for the VSNU:

There is, of course, a call from politics for a clear and unambiguous answer, [but] there just isn't one. Moreover, politics always lags behind new developments, which is logical because it needs to follow a certain trajectory. Universities, by definition, want to respond to new developments. [Thus], what is needed is a very differentiated answer, one in which there is room for universities to act.
According to the VSNU, therefore, researchers should be able to assess their partnerships on a case-by-case basis to maintain their academic freedom and respond to new developments. The interviewees working for the government Ministries thought differently about the ideal approach. The interviewee at the MoEA, for example, also preferred a case-by-case approach and dismissed working with a rigid "black list" (Interview 3). In contrast, the interviewee at the MoFA felt that policy should make a clear distinction between what technology is open for collaboration and what technology is off-limits (Interview 1). At the time of writing, the MoECS is still working on the official guidelines that will likely include the government's official standpoint.
The developments surrounding knowledge security are neither unique to the Netherlands nor are they new, as the issue has been a matter of concern for some years now. The LAC, for example, has been publishing about the risks and opportunities of Sino-Dutch and Sino-European collaborations in higher education and scientific research since 2017. The LAC's reports show that well before DREAMS Lab happened, the Dutch government was investigating the issue and conducting stakeholder meetings. The reports also show that countries like Germany, the UK, Sweden and Australia have published guidelines for universities on international research collaborations.
While knowledge security helps render the threat representation of DREAMS Lab actionable, the term also obfuscates key political considerations. Knowledge security measures, for example, implicitly make a trade-off between security and academic freedom, a trade-off that is inherently political. The act of rendering a threat technical, in other words, has a depoliticising effect. When such considerations are made explicit, however, the resulting tensions need to be carefully managed by the parties in the assemblage.

Managing tensions: responsibility, ideology and external dependencies
Indeed, the response to the DREAMS Lab case has not been without tensions. For both universities and government, there are conflicting interests at stake in international scientific research. This results in inter-actor and actor-specific tensions that shape how parties to the assemblage view the threat and how it should be addressed. Managing these tensions, therefore, is crucial to the continuation of the assemblage in its current form. To do so, the parties to the assemblage engage in a set of practices, referred to by Li as managing failures and anti-politics, that include resolving contradictions, devising compromises and containing the political question of how and what to govern (Li 2007, p. 265). These practices concern how the parties to the assemblage deal with questions of responsibility, ideological dilemmas and external dependencies.
One of the questions raised in the debate over the DREAMS Lab project was who is responsible for knowledge security. Actors in the assemblage agree that academic freedom, including the freedom to decide with whom to work, is a great good that needs to be protected. Hence, the power of government is limited in this area and the primary responsibility lies with universities. What this means in practice, however, is a cause for debate. From a government perspective, an interviewee at the MoECS explained that with "the autonomy [of universities] comes a responsibility (…) to make a thorough assessment of the opportunities and the risks and to make sure that you are well-informed" (Interview 4). The role of the government herein, they added, is to inform universities. Interviewees working outside the government, however, expressed their concerns that universities have neither the capacity, the means, nor the expertise to meet this requirement. In addition, interviewees working for the VSNU felt that the government is unjustly confronting universities with geostrategic and security problems that are the responsibility of politics (Interview 13). Expecting universities to deal with such questions is particularly unfair, according to two interviewees at the UvA and VU, because government policy has long pushed universities to collaborate internationally and to work with businesses. The interviewee at the VU explained:

Dutch science has really been stripped down financially [and] pushed enormously toward business. (…) So yes, if you steer science in that direction, then scientists will act on how they are directed, they will start collaborating with companies. With Qualcomm, which is American, with Ahold, which is Dutch, (…) with Huawei, which is Chinese.
Within the university sector there is debate over how much the government should become involved. While some in academia emphasise the need for government support, others wish to limit government involvement as much as possible (Interview 7). Without debating the politics of responsibility, the above-mentioned framework published by the VSNU takes a step in identifying individual, institutional, and governmental responsibilities.
In addition, projects like DREAMS Lab confront both government ministries and universities with ideological dilemmas. Here too there is a shared understanding that academic freedom and the principles of an open society are a great good that needs to be protected. Parties to the assemblage also agree that international collaboration is necessary, especially with China. Yet, as was the case with DREAMS Lab, these values can conflict with national security interests and raise ethical questions. How these interests are weighed differs per actor. For the MoFA and the Security Services, for example, security interests always weigh heavier than economic or academic interests (Interviews 1, 5 and 12). For the MoECS, open science and open access are the starting point for international cooperation, which should still be made as safe as possible (Interview 4). The interviewee at the MoEA took a more critical position and explained that:

our security is very important and you really have to protect it, but it's a grey area of how far you want to go. (…) We cannot afford to become some kind of autarkic state (…). This would undermine our business model and our security, that naturally go hand-in-hand.
In other words, weighing the interests at stake in projects like DREAMS Lab raises fundamental questions of how the state relates to society. From a university perspective, international collaboration may raise challenges in relation to academic integrity or human rights implications. As mentioned above, a collective of scholars objected to DREAMS Lab because of Huawei's alleged complicity in the oppression of the Uyghurs in Xinjiang (Hijink 2020). For other scholars, international research collaborations are one of the few remaining ways to still engage with colleagues in so-called unfree countries (Interviews 13 and 21). "The worst thing we can do is slam the door shut", explained an interviewee at the UvA, "[a]nd the best way I can make a positive difference as an academic is to build bridges and open perspectives" (Interview 21).
Finally, tensions also arose from external factors and dependencies. One of these factors is that international scientific research is considered instrumental for the positioning of Dutch science. To illustrate this, an interviewee at the VSNU used the following analogy:

[T]his is our chicken with golden eggs, so to say, that we are now going to protect in a cage. The chance is quite big that the chicken will stop growing along with the knowledge position that we build up through [international] collaboration.
In fact, China is the most important research partner outside Europe for the Netherlands. The aforementioned HCSS researcher explained that, "there is a lot to say for continued collaboration with China, because China has a massive reservoir of good scientists, a lot of money, and in a lot of areas they are (…) at the forefront of science" (Interview 9). Not collaborating with China is not considered an option, which implies a dependency.
Responding to the risks of Sino-Dutch collaborations, therefore, not only needs to take into account long-term considerations regarding the positioning of Dutch science but also needs to ensure that in the short term, Sino-Dutch relations are not jeopardised. According to two interviewees (Interviews 4 and 8) this has prevented the Dutch government from taking a hardline approach in its China policy.
(Re)assembling knowledge security

Lastly, the response to DREAMS Lab is analysed as a continual (re)assemblage of existing discourses, actors and alliances, rules and regulations, concepts and security practices, and material elements. Li defines reassembling as "grafting on new elements and reworking old ones; deploying existing discourses to new ends; [and] transposing the meaning of key terms". In other words, reassembling looks at how the assemblage emerges and changes over time by introducing new elements and by rearranging, repurposing or subtracting existing elements. To illustrate this practice, this final sub-section looks at how the response to DREAMS Lab and the implementation of knowledge security measures relate to export control and, specifically, the concept of dual-use.
Export control itself represents an assemblage of actors, regulations, alliances, and practices focused on controlling the international flow of technology that has a military or both a civil and military use (dual-use). Export control not only applies to the trade of physical (or digital) technology, but also to the exchange of knowledge, information, and expertise. Though this was not formally the case for DREAMS Lab, this means that international research collaboration can fall under existing export control regimes and international sanctions (Interviews 1 and 3). The application of export control to a wider variety of technologies, due to an expansion of what qualifies as dual-use, and the discussions over 5G and DREAMS Lab have led the Dutch government to develop new instruments (Interview 6). These instruments include foreign investment screenings and knowledge security measures (Interviews 1 and 3). The VSNU knowledge security framework for universities, for example, identifies three areas of (inter)national regulation that Dutch universities need to comply with: international sanctions, export control and dual-use regulations, and (inter)national codes for academic conduct and integrity. Regarding export control, the VSNU's framework points to the recently published EU Regulation for Export Controls of Dual Use Goods. The new regulation pays greater attention to the human rights implications of sharing cyber- and surveillance technology, one of the objections made against DREAMS Lab.
The expansion of the controls over the exchange of technology and scientific research refers back to the geopolitical lens through which technology is seen, discussed in the introduction. Though the AI-driven search engine technology being studied at the UvA and the VU does not have a direct military use, it is still considered relevant from a national security perspective. The exchange of any knowledge deemed sensitive, therefore, needs to be strictly controlled. One of the interviewees likened the international competition over technology to the arms race during the Cold War: instead of nuclear technology, states now compete in developing AI, quantum computing or big data analysis (Interview 1). At present, the interviewee summarised, geopolitics revolves around technology and the question is "who has the most advanced technology". In other words, "[t]echnology is power". In contrast to what the scientists working on DREAMS Lab believe, therefore, their work is increasingly politicised and they are being confronted with the consequences thereof (Interview 15).
Studying the response to DREAMS Lab as a process of reassembling offers two important insights. First, the analysis goes beyond the DREAMS Lab case and embeds contemporary knowledge security policies and practices in an existing effort to mitigate the risks of international research collaborations. Second, the analysis shows how knowledge security is part of a wider attempt to control the international flow of knowledge and technology that also includes export control, dual-use regulations, international sanctions and investment screenings. This reveals the governing effect the assemblage seeks to have, namely, to control and make safe the international flow of knowledge and technology.

Conclusion
The DREAMS Lab case study helps identify a number of key factors that allowed knowledge security to be assembled. First, in order to align the actors in the assemblage, the threat representation of DREAMS Lab was carefully constructed as a risk for national security, human rights and academic integrity. While the security reading resonated with the MoFA and the Security Services, for example, concerns with human rights violations "struck a chord" in the academic community. What brought these diverging concerns together was a shared understanding of China as a strategic and ideological threat. Second, knowledge security was made a national security matter by framing projects like DREAMS Lab as potential hybrid threats. This helped make sense of the scope and nature of the security threat as well as justify new security measures. Third, though DREAMS Lab was not terminated, the response to the project helped develop knowledge security measures as a new instrument to control the international flow of technology and knowledge. Fourth, the discourse and practice of knowledge security were shaped by internal tensions and external dependencies. Weighing the interests at stake confronted actors with dilemmas such as making a trade-off between security interests and the principles of academic integrity and open science, or between collaborating with China as a crucial scientific partner and complying with pressure from the US to take a more assertive stance regarding companies like Huawei.
These insights from the case study resonate with, but also differ from, Rychnovská's work on dual-use knowledge in the life sciences. First, in the response to DREAMS Lab, a similar convergence can be observed of security and ethical concerns that, in turn, conflict with beliefs of science as a driver of social and economic progress. The case study gives insights into how these interests are weighed differently by the actors involved, which caused tensions within the assemblage. Second, as with the governing of the life sciences, knowledge security entails a form of self-governing that Rychnovská attributes to the ethicalization of security. In the DREAMS Lab case, however, the government's limited mandate to intervene in scientific research and the principle of academic freedom were also key factors that led to self-governing by universities. Third, the self-governing by universities appears to follow a similar logic observed by Rychnovská of controlling the circulation of scientific research. Nonetheless, the DREAMS Lab case also demonstrated that the continued circulation of science, particularly with countries like China, is crucial to Western science and innovation. Governing the flow of science is therefore informed by more than just a security logic, but also by economic and academic interests. Finally, as was the case with the life sciences, governing research on emerging technologies is informed by a threat perception. In contrast, though, threat perceptions have shifted from terrorism to inter-state competition.
The practices of assemblage proved a useful analytical tool to understand the emergence of knowledge security. Specifically, it helped break the governing of scientific knowledge down into concrete practices of (1) constructing a shared threat representation, (2) substantiating claims with credible knowledge, (3) aligning key interests, (4) designing new interventions, (5) managing tensions, and (6) reorganising existing relationships, policies, and practices. Amongst others, these practices have demonstrated how knowledge security has emerged at the intersection of local/global and private/public processes. In the DREAMS Lab case, the analysis also revealed a parallel process of (de)politicisation. While knowledge security, to a certain extent, politicises technology (in this case AI) by placing it in a geopolitical context, it also depoliticises the inherently political act of weighing security, ethical, and academic interests. However, the process of in- and excluding parties to the assemblage remained implicit in the analysis. Making this process more explicit in future analyses will improve insight into the working of power in the assemblage, which remains undertheorized in practice-oriented approaches.
Finally, the following recommendations can be made for future research. To begin with, a similar, bottom-up practice-oriented approach should be applied to other efforts to govern emerging technology and the flow of information as a result of shifting security perceptions in the West. This could include studying the response to Huawei's bid to supply 5G technology to European countries or Russian disinformation campaigns. Though different from the DREAMS Lab case, these cases similarly represent perceived security challenges related to emerging technologies in the context of inter-state strategic competition. Future research should also endeavour to include actors like Huawei that are the subject of threat perceptions in Europe and the US. Including their perspectives would not only result in a more symmetric analysis, but it would also contribute to a better understanding of the above-mentioned process of in- and exclusion in assemblages. Taken together, future research should aim to "unpack" terms like knowledge security, technological sovereignty, or resilience that have gained prevalence in Europe over the past decade. Doing so will help understand how shifting security perceptions are producing new policies and practices that are changing key social processes like scientific research and higher education.