UvA-DARE (Digital Academic Repository) On mapping values in AI Governance

We propose here a conceptual framework by which to analyze legal-regulatory problematics of algorithmic decision-making systems, focusing on mechanisms of value production in their design and deployment. An aim of our intervention is to develop an investigative model for application to algorithmic decision systems with regulatory effects, including predictive artificial intelligence applications and recommender systems that filter data and suggest courses of action. Technical systems that integrate complex algorithmic techniques perform critical and sensitive functions that are both object and instrument of regulatory governance, functions such as predicting behavior, steering information flows, assessing risk, etc. These functions, however, are not simple or static phenomena, but rather contextual, partial performances of complex socio-technical dynamics. One of our interests is to discern what is valorized in this new regulatory ecology. Accordingly, we are sketching a framework to target terms and tokens of value as they are produced, reproduced, incorporated, and translated among design processes, legal practices and background conditions structuring their use. Rather than asking which values AI should satisfy in contested governance contexts, we address conceptually prior questions concerning how values manifest and ‘map’ among context-sensitive computational and social processes in the first place. Furthermore, current research often takes for granted that an AI application is produced against the backdrop of a stable and pre-defined set of values and legal practices. Existing research does not yet adequately account for the ways in which laws and values as produced in and through the ecology of the AI application differ from idealized presuppositions assumed to preexist development of the latter. 
For this purpose, our contribution engages three broad lines of inquiry: one, we take forward calls for a materialized study of law, such as put forward broadly by Alain Pottage, and as put forward more recently and specifically with respect to computational technologies by Mireille Hildebrandt, among others; two, we contribute to the elaboration of a critical practice for AI, in the tradition of Philip Agre; and three, our attention to assemblages potentially contributes to debates over techno-regulation or regulation by design.


Introduction
We propose here a methodological orientation by which to analyze legal-regulatory problematics of algorithmic decision-making systems, focusing on mechanisms of value production in their design and deployment. Technical systems that integrate complex algorithmic techniques are increasingly performing critical and sensitive functions that are both object and instrument of regulatory governance, functions such as predicting behavior, steering information flows, assessing risk, etc. These functions, however, are not simple or static phenomena, but rather contextual, partial performances of complex socio-technical dynamics.[2] An aim of our intervention is to develop an investigative model for application to algorithmic decision systems with regulatory effects, including predictive artificial intelligence applications and recommender systems that filter data and suggest courses of action. Our interest is a fundamental one, to discern what is valorized in this new regulatory ecology. Accordingly, we are sketching a conceptual framework to target terms and tokens of value as they are produced, reproduced, incorporated, and translated among design processes, legal practices and background conditions structuring their use.[3] Intervening in the overarching debate into whether and how AI may share or conform to democratic values,[4] our study aims at the question of how values are generated and distributed in concrete settings with regulatory consequences, where automatic and automated decision-making components encounter contextualized social processes.[5] Rather than asking which values AI should satisfy in contested governance contexts, we address conceptually prior questions concerning how values manifest and 'map' among context-sensitive computational and social processes in the first place.[6] To do so, we articulate methodological considerations with two focal points: the interaction of law and AI technologies; and the production of value in line with that interaction. For this purpose, we adopt the concept of assemblages, borrowing from Manuel de Landa's elaboration of assemblage theory, focused on 'wholes whose properties emerge from the interactions between parts', including material and semiotic elements.[7]

We will try to elaborate this progressively as we go along, but for unfamiliar readers, as a starting-off point, an assemblage is a composite made up of a multiplicity of elements featuring 'connections between semiotic chains, organizations of power, and circumstances relative to the arts, sciences, and social struggles'.[8] For a nice, partial representation of an assemblage-in-a-nutshell, we can point to Kate Crawford and Vladan Joler's Anatomy of an AI System, which graphically renders Amazon Echo 'as an anatomical map of human labor, data and planetary resources'.[9] Their rendering includes heterogeneous elements, from smelters and refiners, to internet infrastructure, to user voice recordings, to labor conditions, shipping routines and waste disposal. For another example, we will discuss the Dutch SyRI case, below, in which numerous government services databases were joined with the intention to produce warning flags for potential abuses such as fraud across applications for public services. In cases like that, the assemblage is quite large, encompassing myriad administrators, tools, protocols, citizens, users, norms, resources, relations, etc. At first glance, that may seem so large as to be unhelpful, but that is part of the point in turning to assemblage theory: to get a hold on such unwieldy constructions. It was precisely the purpose of the SyRI initiative to draw on diverse public services practices and thereby condition general public behavior, and it is the point of our intervention to discover ways in which social values are (re)produced in such socio-technical endeavors.

With respect to the interaction of law and AI technologies, we develop the proposition that law is not defined outside of the assemblage, but by relations within it (i.e., within the same overall assemblage in which the AI is developed and deployed).[10] With respect to the production of values, value functions as a sort of bracketed term, setting off an absence or indeterminate space around which the material assemblage functions.[11] By attending to law and values as parts of socio-material assemblages, we will foreground two things. First, how law and values are contingent on assemblage characteristics of network and flow, or constellated structures of nodes and connections, and how the dynamic movement of information and things among those nodes and connections is co-constitutive of the overall assemblage. Second, how the assemblage is a heterogeneous and mutable construction, potentially encompassing multiple, disparate, and variable networks and things.[12]

Much of the analysis below proceeds in the code is law tradition, though we expand on limitations that we observe in that school of thought.[13] Throughout, we analyze law, its constitution and its effects, as material phenomena. Analyzing law's materiality brings us within the ambit of other work, such as that of Mireille Hildebrandt, under the complementary banner of law is (or as) code.[14] In her analysis of law as code, to which we return in more detail nearer the conclusion, Hildebrandt takes the further step of analyzing law as information. This is an important vector of law's materiality. But it is one among several. While adopting Hildebrandt's insight, we also argue that limiting the material analysis of law to its character as information has had the peculiar effect of short-circuiting the material analysis, rather than deepening it. The consequence in Hildebrandt's work is to fall back on celebrated abstractions such as rule of law ideals. We mean here to extend the research framework, to take the material analysis still farther. On this basis, our intervention contributes to the active debate concerning feasible and desirable ways to regulate (with) AI applications at work in the public sphere, and the values appropriate to that endeavor.[15] We focus specifically on ways to discern value-(re)productive interactions and mechanisms at work in cross-embeddings of AI-driven programs and social systems, with governance effects.[16] To do so, we consider how value attributions denoting relative worth, merit, or importance take form in ecologies of human and computational agents.

2. Cf. L Amoore, The Politics of Possibility: Risk and Security beyond Probability (Duke University Press 2013).
3. We focus here on the uptake of AI technologies in traditionally public sector activities by governmental institutions. We do so because it presents a clear domain to make our case about social value production involving law and technology, but there is no reason to limit the analysis to public institutions only. Amazon's recommender systems may drive value production as well as algorithmic decision systems used in tax administration offices.
4. See, e.g., M Kuziemski and G Misuraca, 'AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings', Telecommunications Policy (2020): 101976.
5. Though they are not all the same thing, we will use terms like AI, automated and algorithmic decision systems interchangeably in this article. In general terms, we mean to focus on decision systems that draw patterns from large data sets to generate solutions deemed optimal with respect to some problem, with subsymbolic AI programs foremost in mind. But in some cases, researchers will not necessarily know whether a decision system involves a subsymbolic AI architecture or some other algorithmic design. As we describe below, the research agenda that we sketch here may be productive even in those cases where aspects of the system are not transparent, and for that reason we use the terms interchangeably, with emphasis on AI.
7. M de Landa, A New Philosophy of Society: Assemblage Theory and Social Complexity (Continuum, 2006) 5. The material and social or semiotic parts range 'from a purely material role at one extreme of the axis, to a purely expressive role at the other extreme' (p. 12). Further, they coexist in 'relations of exteriority', meaning that 'a component part of an assemblage may be detached from it and plugged into a different assemblage in which its interactions are different' (p. 10). Consequently, 'assemblages may be taken apart while at the same time allowing that the interactions between parts may result in a true synthesis' (p. 11). The interactions among component parts determine the viability of an assemblage, tending either to 'stabilize the identity of an assemblage, by increasing its degree of internal homogeneity or the degree of sharpness of its boundaries, or destabilize it' (p. 12).
8. The quote is borrowed from Deleuze and Guattari's related definition of the rhizome. G Deleuze and F Guattari, A Thousand Plateaus: Capitalism and Schizophrenia (University of Minnesota Press, 1987), p. 7.
9. K Crawford and V Joler, Anatomy of an AI System: The Amazon Echo as an anatomical map of human labor, data and planetary resources (2018), available at https://anatomyof.ai/.
10. On the generative productivity of assemblages, as well as some of the limitations of the notion, cf. J Puar, '"I would rather be a cyborg than a goddess": Becoming-intersectional in assemblage theory', PhiloSOPHIA 2 (2012): 49-66.
11. Elsewhere, we observe this phenomenon of bracketed spaces doing regulatory work in institutional context, in D van den Meerssche and G Gordon, '"A new normative architecture": risk and resilience as routines of un-governance', Transnational Legal Theory 11 (2020): 267-299.
Our work here proposes a mode of investigation for fundamental research into the values that are produced and distributed in legal-technical interactions involving algorithmic decision systems. Much of this article will be devoted to establishing the theoretical and methodological groundwork for such research. But we believe that this mode of investigation holds substantial practical utility both for AI practitioners, including individuals and organizations that research, design and provide data for AI, and for legal professionals in diverse roles such as regulators, judges and practitioners.[17] Consider the legal practitioner trying to fight action taken in part on the basis of a recommendation produced by AI. That action may concern a parole decision, or a credit rating, or inclusion on an asset-freezing black list, or stops at borders, etc.

Focusing on the AI output alone will be minimally revealing. Focusing on the code and training of the AI may go only marginally farther: the code may be kept secret, the training dataset may be inaccessible, the training process and/or the model may have become inscrutable, the transparency of the algorithmic decision-making process may have little to say in terms of human, domain-dependent semantics, etc. Moreover, the AI does not operate in isolation. The practitioner will be better served by tracing the series of interactions in which technical operations and normative mandates are translated back and forth and from point to point in a complex assemblage of actors and actions.[18] No single one of these interactions or points will determine the conditions that the practitioner aims to challenge. But likewise, neither are those conditions a generic sum or a simple additive product. Any actionable AI output will only be possible on the basis of discrete conditions (competences, pressures, material affordances, etc.) discoverable at different moments in the assemblage. Regulatory activities (adjudication, interventions), as well as socially-aware AI design, require acknowledging at a fundamental level the ecological nature of the socio-technical processes in focus. This is one of the principal aims of our method: to account for the assemblage in a way that reduces it neither to any one, arbitrary point (the AI, or the parole officer, etc.), nor to an undifferentiated whole. As a result, just as the academic will have a wider array of interactions among incentives, pressures, materialities and routines for analysis, the AI practitioner will gain reflective standpoints relevant for design, development and deployment phases, the legal practitioner will have more sites to contest, and the regulator and judge will have a fuller perspective on the normative conditions and stakes at play in any given outcome. By contrast, models that reductively match outputs with presumed values will be inadequate and misleading.

We develop our proposal in two main steps before concluding. First, in Section 2, we elaborate the case for an expanded research framework. The section proceeds in three parts: first we establish the heterodox variety of sites pertinent to a study of legal-technical interactions and the values that they maintain; then we explain the need to get beyond traditional subject-object modes of (legal) analysis; and finally, we illustrate the benefit of new perspectives for research with an example, the recent Dutch SyRI case. In Section 3, we describe the renewed analysis that we propose. The argument proceeds in four parts: first, with a brief overview of materialist analysis in law; second, with a focused look at a recent development in the materialist analysis of law, namely Hildebrandt's call to study law as information; third, we look at some consequences of applying this new perspective to live controversies in AI and governance today; and fourth, we explain the specific technique of encircling that we adopt from security studies for the purpose of making values legible in complex socio-technical assemblages. Section 4 concludes with the summary observation that a research program, even one informed by materialism, must go beyond traditional empiricism to address how the interaction of law and technology governs the production of value, even as terms of value govern the production of AI and regulatory decisions. Throughout, as part of our overall intervention in debates over questions of values in AI governance, our contribution engages three broad lines of inquiry: one, we take forward calls for a materialized study of law, such as put forward broadly by Alain Pottage, and as put forward more recently and specifically with respect to computational technologies by Mireille Hildebrandt, among others; two, we contribute to the elaboration of a critical practice for AI, in the tradition of Philip Agre; and three, our attention to assemblages potentially contributes to debates over techno-regulation or regulation by design.

12. Both of these elements derive from the seminal elaboration of assemblages in G Deleuze and F Guattari, A Thousand Plateaus, supra n. 8; we also incorporate from related notions of desiring machines and desiring production in their related work, Anti-Oedipus (University of Minnesota Press, 1983).
13. Larry Lessig's Code and Other Laws of Cyberspace (Basic Books, 1999) is the seminal work.
14. M Hildebrandt, 'Law as Information in the Era of Data-Driven Agency', The Modern Law Review 79.1 (2016): 1-30.
15. C Cath, 'Governing artificial intelligence: ethical, legal and technical opportunities and challenges', Philosophical Transactions of the Royal Society A (2018) 376: 20180080.
16. We link this work with our work in other social domains. See, e.g., I Feichtner and G Gordon, Constitutions of Value (forthcoming). We also connect to other scholarly traditions, such as the 'values in design' field, e.g., H Nissenbaum, 'Values in the design of computer systems', Computers in Society (1998) 38-39.
17. We note that the general research program we describe here would encompass particular socio-legal inquiries such as described elsewhere in this symposium, by L van Rompaey et al., 'Designing lawful machine behavior: Roboticists' legal concerns'.
18. For a cutting-edge example of this analytic at work, see G Sullivan, The Law of the List: UN Counterterrorism Sanctions and the Politics of Global Security Law (Cambridge University Press, 2020).

2. The case for an expanded research framework

A heterodox socio-legal framework
Current research often takes for granted that an AI application is produced against the backdrop of a stable and pre-defined set of values and legal practices.[19] Existing research does not yet adequately account for the ways in which laws and values as produced in and through the ecology of the AI application differ from idealized presuppositions assumed to preexist development of the latter.[20] Laws and values performed in the design, deployment and regulatory construction of AI applications may be particular to the ecologies of their assemblage, not identical with aims and suppositions thought to preexist their institutional development. When we say that a law or value is performed, we mean that the thing in question (the law, the value) is constituted out of the material actions and interactions conducted in its name.[21] On this basis, the notion of regulatory practice as an exogenous enterprise is one that we aim to get away from.[22] Accordingly, we propose a research agenda by which to observe and examine how governance effects are performed in the process of dynamic, interrelational activity, rather than prescribed in idealized goals and elusive institutional mandates. Thus, we seek to articulate a program by which to map the ways in which values are generated and distributed in contingent sites and interactions within the structured processes for developing and deploying AI applications at work in the public sphere. For this purpose, our method includes locating consistent passage points with a reflexive method of 'encircling', a research practice developed in security studies to delineate the material and discursive spaces in which inaccessible and indeterminate elements are incorporated, translated, and reproduced.[23]

Encircling will help us to delineate the discursive and operational spaces occupied with terms of value, and the sorts of constellated relations assembled around these spaces. We treat values, laws and regulation not as pre-given artefacts, but as embodied practices, generated in part out of the practices and material circumstances to which they are taken to apply. Our framework aims to observe laws and values that are endogenous to and contingent on the assemblage itself, rather than assuming in advance some autonomous possibility for what those values and laws might be or represent.[24] On this basis, we mean to map the actual production and distribution of values at work in the design and operation of AI applications determining conduct in the public realm today.

Our project goes forward within the tradition associated with Lessig's Code is Law formula, but with one significant distinction.[25] We are within the tradition insofar as we observe that (computer) code itself conditions behavior in ways typically associated with law and regulation, and that the law and regulation applicable to code must take account of the conditioning agency of code. But we break from the tradition insofar as it perpetuates an idealistic notion that law exists in some exogenous realm, to be channeled and applied with more or less fidelity, and we break from that tradition regardless of whether the representation is developed in the form of legal argument or a programming language. In its regulatory effects, law, like code, is dynamically determined by the way that it is embodied and enacted. Therefore, our framework works against an idealistic conception of law, which bifurcates the regulatory reality with a distorting division between the law and the technology under analysis. In this sense, we take up the code is law formula, but informed by the socio-legal research agenda proposed by Alain Pottage in 2012, and the rigorously materialist methodology that he proposes.[26]

We will progressively elaborate what a materialist methodology can mean and entail, but to paraphrase Pottage, materialist methodologies have explored the involvement of material things in the production of technical knowledge and social relations. Law, conceived in this way, is not merely 'a product of human agency or intentionality alone, or an effect of compromise between purely human institutions or collectivities'.[27] Rather, a crucial vector of law and legal practice includes 'the kind of agency that is afforded by, elicited from, or ascribed to' technical objects.[28] Here, we develop the point reciprocally: law and legal practice cannot be understood outside of material context, and technical systems cannot be understood independent of the laws and legal practices with which they are entangled. In what follows, we focus on the law and regulation applicable to AI within the context of the law and regulation exhibited by AI. But we comprehend that law and regulation, alongside the values mutually implicated with the law and regulation, as what emerges out of the assemblage of material parts, semiotic elements, structural constraints and socio-technical interactions that occur across partially overlapping zones of operation potentially characterized by different spatio-temporal scales, all of which goes into the assemblage and flow of algorithmic decision-making systems.

23. M de Goede, E Bosma, and P Pallister-Wilkins (eds), Secrecy and Methods in Security Research: A Guide to Qualitative Fieldwork (Routledge 2019).
24. This problem has analogies with the symbol grounding problem, i.e. of how words/symbols get their meaning, which interestingly can be seen as one of the reasons for the success of machine-learning AI against symbolic AI. See e.
26. A Pottage, 'The materiality of what?', Journal of Law and Society 39 (2012): 167-183.

Against objectification and bifurcation
Let us restate: the formula code is law tells us that code is doing something that we associate with law, and that this complicates the notion of developing regulation and law applicable to code and code technologies. We agree that code conditions behavior in ways typically associated with the notion of law and that this complicates the application of law and regulation to code, but we ask the further complicating question: what if we take away recourse to the notion of law as something that clearly exists apart from the computer code? The proposition is not really limited to code: if code can be law, so can buildings and satellites and cigarettes and seasons. The number of things that can be part of law in this sense is countless.[29] None of these things, however, operates alone in this capacity.

The regulatory dimension that each material thing embodies, whether human, machine, or something else, is the product of its interoperation or implication with other things. A door, for example, works to regulate movement only in combination with a wall or other obstacle.[30] The infinite number of things that constitute law in their interoperation raises the question: does law ever exist outside of these interconnected embodiments that are otherwise treated as proxies? When we treat these concrete circumstances as proxies for law, we reproduce an ideal and idealistic notion of law that exists outside and beyond the same concrete circumstances. By this notion, the idea of the law is disconnected from any concrete manifestation. We are up against something like the epistemic division of subject and object diagnosed by Kant and countless others (before and) since: the autonomous object is not a knowable one, except perhaps in a purely idiosyncratic, personal sense. Despite that long tradition of thought, lawyers and others continue to treat the law as though it has some reality independent of the specific practices by which law is made concrete in any given instance, so that each instance involves a representation of an exogenous law, rather than its living constitution.[31]

This chimerical condition, everywhere represented, so nowhere properly constituted, perhaps accounts for some of the contradictory characteristics commonly attributed to law and legal practice in the regulation of technology: as both omnipresent and irrelevant, alien and utterly banal, always in the way and always behind the times. Consider again 'code is law': the insight goes halfway to materializing law in the form of code. But in its usual reading, the code must be two things at once, a functioning code and a representative of law. This double identity creates two related dilemmas. The first and more obvious is that the conceptual separation between the two identities creates the space for misrecognition: the policymaker or critic may 'not get' the code; the code may 'get it (the law or policy) wrong'. The second dilemma is more subtle, and goes to the core of the problem with notions of exogenous law: no law is perfectly determinate, or announces its own determination in all cases; as a result, the bifurcated identity that is representative of law defies closure, which in turn creates tension with the other, the functional program that must serve as proxy for the normative end.[32]

Moreover, the regulatory effects ostensibly acknowledged according to 'code is law' become artificially disassociated from the supposedly functional identity, which was, however, the point of the code-is-law formula in the first place. As a result, the code becomes known as an abstract mathematical exercise, while its regulatory effects are treated according to its representative capacity. The effects thereby become subject to (indeterminate) interpretive exercise divorced from the material reality ('zeroes and ones') of the code as such, which is construed paradoxically as abstract calculation. This is the misrecognition problem, but now deepened to the point that the code cannot properly be recognized as a material agent, by virtue of serving as proxy for a representative abstraction. To make this clearer, let us sketch an example drawn from recent events.

29. See, for example, the various possibilities considered in the special issue edited by HY Kang and S Kendall, 'Legal Materiality', Law Text Culture 23 (2019), available at https://ro.uow.edu.au/ltc/vol23/iss1/.
30. See, e.g., Bruno Latour's article written under the alias of Jim Johnson, 'Mixing humans and nonhumans together: The sociology of a door-closer', Social Problems 35 (1988): 298-310.

The SyRI example
Consider the recent SyRI technology in the Netherlands.[33] Developed by the Dutch Ministry of Social Affairs and Employment since 2014, SyRI is a system designed to link databases held by a multitude of otherwise separate agencies, to collect and sift information about Dutch citizens, for the purpose of producing risk warnings signaling potential fraud in individual applications for social services. SyRI was designed for end use by a variety of national agencies (e.g., the tax authority, the authority responsible for employment benefits, etc.) and municipalities.
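To fix intuitions about the kind of architecture at issue, the database-linking and risk-flagging design just described can be sketched in a few lines of code. The sketch is purely illustrative: the actual SyRI model, indicators and thresholds were never disclosed (a point central to the litigation discussed below), so every field name, rule and threshold here is our own hypothetical. Even so, the toy version makes visible where value choices enter: in which databases are joined, which fields count as indicators, and where a threshold is set.

```python
# Illustrative sketch only: a toy risk-flagging pipeline that joins records
# from separate agency datasets and emits warning flags. All field names,
# rules and thresholds are hypothetical; the real SyRI model was not disclosed.

# Toy per-agency records, keyed by a shared citizen identifier.
tax_records = {"c1": {"declared_income": 12000}, "c2": {"declared_income": 30000}}
benefits_records = {"c1": {"benefit_level": "full"}, "c2": {"benefit_level": "none"}}

def link_records(*sources):
    """Join per-agency records on the shared identifier (the 'linking' step)."""
    linked = {}
    for source in sources:
        for cid, fields in source.items():
            linked.setdefault(cid, {}).update(fields)
    return linked

def risk_flag(record, income_threshold=15000):
    """A hypothetical rule: full benefits plus low declared income raises a flag.
    Both the rule and the threshold are value choices, not neutral facts."""
    return (record.get("benefit_level") == "full"
            and record.get("declared_income", 0) < income_threshold)

linked = link_records(tax_records, benefits_records)
flags = [cid for cid, record in linked.items() if risk_flag(record)]
print(flags)  # prints ['c1']
```

Each design decision in such a pipeline (the choice of sources to link, the encoding of 'risk', the threshold) is precisely the sort of contingent site of value production that the framework proposed here aims to make observable.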
Following the bifurcated approach, the technology will be understood as two things: an infrastructure and algorithmic system, applying mathematical operations to manipulate networked data; and also a legal system, mimicking (and to some extent surpassing) an attentive civil servant. The two functioning identities are associated with two sets of commands, and will be analyzed independently as well. The legal agent will be debated according to constitutional arguments and ethical debate; the application will be analyzed according to its efficacy relative to the ends associated with the command under constitutional and ethical debate. The normative character of the command remains the province of lawyers and judges, the computational efficacy the province of engineers and programmers, with all the misrecognition that division of labor entails. But the legal command, at its root, is an indeterminate one: even a foundational legal document, such as a national constitution, is a source of arguments and contestations over values, not self-evident answers; the relative stability of constitutional arguments and values is contingent on extralegal conditions, such as the institutional and economic mobilizations of material resources arrayed for and against any one interest at play in the discursive legal space. As a result, the other command element, associated with the computational device and the efficacy of the algorithmic system, remains contingent on an indeterminate (legal) variable and the further material conditions that stabilize its use in practice.
The SyRI technology was recently found contrary to Article 8 of the European Convention on Human Rights, which broadly protects the right to respect for private and family life, home and correspondence. Significantly, however, neither of the competing legal interests, privacy and social services administration, was definitively defined. The judgment of the court focused on a symmetrical and separate failure of definition, namely, with respect to the technology. The final judgment of the court was premised on the failure of the government to offer any meaningful explanation of the technology, let alone its limits.[34] Because the technology remained opaque, the court decided the case without ultimately determining the scope of the privacy protection under Art. 8. Let us point out two things about this judgment, one appreciative and one critical. Our appreciative comment is also in the interest of transparency: we support the decision, and acknowledge the work that constitutional arguments can do in the public institutional realm. But that work is limited, and future cases will be less extreme than SyRI, in which the total failure to explain the technology undermined the government argument. And so our critical take: as similar technologies continue to populate and condition the public sphere, another mode of knowing and assessing the interplay of law and technology will also be necessary to interrogate their governmental repercussions.

33. NJCM et al. v. The State of The Netherlands (5/2/2020).
34. Ibid., at paragraphs 6.49; 6.65; 6.89-6.90; 6.94-6.95; 6.100; 6.105-6.106.
We mean to be clear about two things here. When we write appreciatively, we mean to flag our normative presuppositions. When we write critically, we do not mean to jettison traditional constitutional practice (such as produced victory in the SyRI case). But we mean to suggest that traditional constitutional practice is not enough. The fundamental weakness of the government's case in SyRI is emblematic of an underlying problem, one that we suggest is just as likely in the future to frustrate meaningful regulation of artificial intelligence as to enable it: because the legal (and constitutional) mandate will never be a finally determinate one, because it will always be the site of conflicting arguments, the technology that is adapted to its use (under the bifurcated way of knowing it) will not be finally explainable either, but subject to interpretation and speculation. The problem is further exacerbated by the character of complex software systems that are distributed, multi-layered, and dynamic. [35] Our methodology, for these reasons, is attuned to the dynamic conditions of interactions that constitute the assemblage. Accordingly, rather than focusing attention on the ideal norms applicable to AI generally, we focus attention instead on the assemblage and the interactions that constitute its viability and operation, to expand material opportunities for intervention (whether by public agencies, or to contest private ones). Consider in this light two vanguard examples. Gavin Sullivan intervenes in the governmental practice of blacklisting (which includes no-fly lists, lists that freeze the assets of individuals and entities, etc.), first by tracing the assemblage, including the networked interactions by which algorithmic decision systems populate the lists in the first place.
[36] Sullivan's work is part of ongoing deliberation (both within and without public institutions) of such governance practices in fields of transnational security. Another example is the work of Sara Kendall, who in one study looks to labor-management relations as a vector for intervention in the governance of AI systems incorporated into weapons and surveillance assemblages developed by and for state security apparatuses. [37] As the examples from Sullivan and Kendall should make clear, the indeterminacy thesis does not mean that the law has no force or effect, any more than it means that law cannot be stabilized. Rather, just as legal interpretations will be stabilized by extra-legal factors (institutional, economic, etc.), the force or effect of legal practice will likewise be contingent on material factors sometimes not recognized as proper to the essence of the law. Equally so, while the indeterminate nature of law frustrates the bifurcated identity of the conjoined technology, the latter does not lose all effectivity as a result. Internet regulation, for instance, will have no determinate 'truth', but will be contingent on countless institutional, material and technical arrangements (exhibiting divergent economic and political interests, among other things), arrangements specific to the various places in which there are sufficient ways and means (and interests) to engage in internet regulation in the first place. These arrangements will not necessarily - or, stronger, not likely - be coincident with optimal regulatory arrangements tailored to the material opportunities and hazards of new technologies as understood by their engineers and technologists. The problem, however, is not simply that the regulators do not 'get' the technology. The problem is also that the regulators represent a law that cannot properly correlate to the technology in the first place, because 'the law', so conceived, lacks material capacity.

Rethinking materiality
We can go forward now from the proposition that law cannot be adequately known independently of the material circumstances in which it is applied, which are not mere proxies: material circumstances are constitutive of the law. [38] To offer an analogy, think of the time that structures each day: hours, minutes and seconds are nothing other than what networked clocks concretely produce (though they may also be wrongly taken to exist in some natural, autonomous sense). [39] What happens when one starts from this perspective? In one sense very little, in another sense quite a lot. We have pointed out in brief how the indeterminate nature of an immaterial law leaves the field of regulation open to a host of diverse arrangements - institutional and economic, material and political, etc. - which frustrate the ability to assimilate law and technology, whether conceptually (as we illustrated with SyRI) or practically in any optimal sense (as with internet regulation).
From the perspective of our new starting point, very little in this image changes materially: regulatory practice continues to be generated out of these same arrangements. But what does change is that these arrangements are not merely interference between a better law and a waiting technology. We bridge the gap, from this perspective, between 'the law', these arrangements, and the technologies to which they apply. Recall the practical applications suggested in the introduction: rather than focusing on the perpetual disjuncture between an exogenous law and an intractable technology, which so often circles back to the familiar paradox that law lags behind technology but also gets in the way, our perspective refocuses practical attention on the actual production of an endogenous law and the values that it comprises. Once the law is comprehensible in its material dimensions, the technology, too, can be fully materialized - i.e., also in its regulatory dimension. The artificial intelligence tasked with risk warnings is no longer bifurcated into a set of computational commands in service of another, exogenous (and not finally knowable) command (a command that in the specific example of a SyRI-like system must normatively encompass disparate issues of privacy, social welfare, security concerns, bureaucratic exigency, and more). It is an integral agency that materially conditions other agencies. Equally, it can also be contested or recalibrated as such, for example, by adapting which data are used for decision-making and tweaking the decision process, but also by asking how decisions are explained and what kind of redress is available for affected individuals, or, more radically, by intervening in the chain of relations among people and things that produce the AI in the first place. This shift in perspective holds that research must be directed away from idealistic doctrinal inquiry, towards the assemblage by and through and in connection with which the law is embodied in practice, and points to where the products of legal research are best applied, in the manner of intervention. By making visible the actual, material constitution of legal practice and regulatory effect, the locations suited to intervention become clearer. Deliberating the 'better norm' no longer holds as the activity best suited to comprehend regulation of or by technology such as algorithmic decision systems and how to intervene normatively in their development and use. Rather, the deliberation focuses on the normative character of the assemblage itself, to ask what socio-material relations constitute the assemblage by which the AI is developed, deployed and

38 We build here on recent developments in scholarship and method, for instance as advanced by Hyo Yoon Kang and Sara Kendall. See, eg, their special issue on legal materiality in Law Text Culture, vol. 23 (2019); and their chapter 'Legal materiality' in S Stern, M Del Mar, and B Meyler, eds., The Oxford Handbook of Law and Humanities (OUP, 2019): 21-38. 39 Cf, G Gordon, 'Engaging an infrastructure of time production with international law', London Review of International Law (forthcoming).
maintained. While the SyRI case successfully observed a bright-line rule, we suggest that the bright line may obscure more than it illuminates over time. For as the governance space continues to become populated with increasing numbers of AI systems, the actual effects of AI interventions into value-defining and value-distributing processes will enter into the equation of just what that bright line demarcates. AI technologies like SyRI, for example, change modes of accountability and bureaucratic embedding: the civil servant who previously made the decision is not simply 'replaced by a machine', but by a techno-bureaucratic construct, with the effect that the governmental assemblage - and the values that it supports - is significantly reorganized.
Deliberating regulatory norms does not cease to be part of the legal enterprise - legal practice remains discursive in nature, in which stated norms play an ineluctable coordinating function in a complex socio-political negotiation over material interests. But a coordinating function is not a self-sufficient one, and its actuation and normative effects cannot be achieved or properly understood outside of the material conditions of practice by which the norm is enacted and embodied. [40] When the risk warning recognition system is not understood to approximate an indeterminate ideal, but to embody a concrete value system, a single investigation can explore the material condition of the law as it is constituted in and by the assemblage. Further, by going this route we can meet from the legal side the critical practice for AI sketched by Philip Agre in the late 1990s, but still ripe for development today. [41] We return to this possibility in the conclusion. From this perspective, law ceases to exist meaningfully in any immaterial way - but that does not mean that law ceases to exist. Law continues to exist as a category of embodied practices that constrain some interactions and privilege others. In this light, the norms and doctrinal terms that typically occupy legal analysis are necessary but not sufficient to comprehend what law is and does. Perhaps counterintuitively, seeing law in this way also entails a break with much socio-legal analysis, despite (or more accurately: developing out of) the work done in socio-legal analysis to diversify the comprehension of law. For while socio-legal analysis has expanded the catalog of things, places and moments in which law can be found (eg, code), the scholarship has often done so by treating the entrants in that expanding catalog as vessels for an immaterial law that preexisted their adoption into the domain of legal scholarship. Or, as Alain Pottage puts it, law is interjected into 'the texture of a social life that is thereby
anatomized (sub silentio) in such a way as to make it the medium (or, more accurately, the "context") of this expanding and ever-ramifying instance.' [42] As a result, much socio-legal research has 'turned "law" into an abstract and generalized social instance, or a question that exists even before theoretical reflection gets under way.' [43] Instead, we ask with Pottage:

whether we should mobilize the rich conceptual potential of theoretical reflections on materiality simply to give substance to the assumption that there is such a thing as 'law'. Why not instead recruit the potentiality of 'materiality' to imagine material worlds that are not always already configured into law, science, politics, and so on, but which are, in something like the original sense of actor-network theory, as confused as the veining of the marbles of St Mark's Basilica, and which call for more productive and more adequate modes of analysis? [44]

By this mode of analysis, law becomes a function or moment of the assemblage, rather than the other way around. As such, law ceases to exist in itself. It is no longer consigned to the unknowable Kantian object. In its place, we find a complex intermingling of material conditions embodied and performed in social life by people, things and technology. The point, however, is not a celebration of complexity: the point is to develop a research method suited to its analysis.
On this basis, we mean to find the values privileged or suppressed in conjunction with legal practice in dynamic contexts, where law takes on meaning in the act of its deployment. To do so, we propose to stop looking for something that already exists under the banner of law, and instead to look at the constraints and privileges opened up or afforded by material and social interactions, including discursive but materially situated legal practice. In the context of algorithmic decision systems, we propose to bracket ethical deliberation of the better (abstract) norm to guide the technology (in the manner of law applied to the technology), and instead to comprehend the norms generated by the assemblage in which the technology is developed and deployed. On that basis, a renewed ethical and political vocabulary for regulatory intervention is possible. [45] Pottage speculates that rigorous reflection on materiality might 'actually lead to the dissolution of law as a social instance'. [46] We do not need to go that far. Networks of all kinds comprise legal practices among their component parts and internal dynamics. As a communicative practice that is materially situated and supported, legal practice in these networked operations functions to sustain the network as a discursive tool within it. As should be clear, we aim to elaborate what it means for legal practice to be materially situated and supported within a network, and in turn what sustaining the network entails in that context. As we discuss below, that role includes for legal practice the ability to facilitate network interactions with other function networks, such as when financial sector technologies and practices encounter technologies and practices related to the remediation of environmental harm. This is a systems theory observation, that law serves as a point of interface across differently oriented networks.
[47] In the same vein, we further observe legal practice broadly as an exercise to manage expectations across network interactions, restricting some activities and privileging others by coordinating horizons of material expectations among networked participants (including expectations of and among objects and things, such as circuits and switches). [48] These coordinating horizons function as material affordances. Originally introduced as opportunities for, or invitations to, action perceived by an agent, [49] affordances can be seen as 'the range of functions and constraints that an object provides for, and places upon, structurally situated subjects.' [50] Affordances enable (or disable) interactions, and invite (or disinvite) action. Mireille Hildebrandt has recently attended to the effect of media affordances on the communication of law, and we turn to her work in a moment. But we are also interested in legal performances as affordances (and not merely conditioned by them) within the assemblage. Legal practices as affordances produce the effect of law in their performance or embodiment. They contribute to the network assemblage as interactive component parts, and the reality of the law they produce derives precisely from their interoperation with other parts of the assemblage (including sometimes interaction with elements outside the networked assemblage). [51] This does not mean, however, that networked legal practices (or any other networked activities) are not purposive. Every network activity combines an individual aim with a networked end, each translated into the other - the 'successful' network, the one that coheres and functions, is the one in which these translation exercises work in concerted fashion to facilitate the interactive energies of multiple participants over time. [52] The translation exercise, however, is an uncertain and, again, an indeterminate one. By virtue of the uncertainty in any given act of translation, and the variety of possible outcomes, the aggregate effect of the many translation exercises involved in a complex network exceeds the intentionality behind any one act. For this reason, the development of the network (including the legal practices embodied within it) is dynamic and indeterminate, not mechanistic. SyRI may be a set of broadly knowable algorithms or not, but its actual performance over time depends on a variety of dynamic elements, including humans and machines.

45 Thus, we make our intervention roughly in keeping with the recognition by Aizenberg and van den Hoven of socio-technical gaps between the technology and calls for ethical development. E Aizenberg and J van den Hoven, 'Designing for Human Rights in AI', arXiv preprint arXiv:2005.04949 (2020). We also make our intervention with critical appraisals of mainstream ethics in mind. See, eg, AL Hoffmann, 'Terms of Inclusion: Data, Discourse, Violence', New Media & Society (forthcoming). 46 Pottage, supra n. 26, at 180. 47 N Luhmann and F Kastner, Law as a social system (Oxford University Press 2004). 48 Ibid. 49 J Gibson, The ecological approach to visual perception (Houghton Mifflin 1979). 50 J Davis and J Chouinard, 'Theorizing Affordances: From Request to Refuse', Bulletin of Science, Technology & Society 36 (2016): 241-48.

(Beyond) law as information
One meeting point of the discursive form of law and its material agency in wider socio-technical assemblages is as information. Mireille Hildebrandt has lately focused on law and information, under the banner of a formula similar to - but distinct from - 'code is law': namely, law is (or law as) code. [53] Within the rubric of law as code, Hildebrandt takes the further step of comprehending law as information. Comprehending law as information opens up an important dimension in the material analysis of law. Following Hildebrandt's work, it draws attention to the media by which law and legal discourse are communicated, and specifically the affordances manifest in the various media deployed for law's reproduction and transmission. [54] These observations are essential to a materialist understanding of law. Hildebrandt's observation of law as information, however, does not represent the whole of law's materiality, and her precise focus on law as textual information limits what we know about the values that the law coproduces.
Elsewhere, Philip Mirowski and Edward Nik-Khah have documented the pervasive adoption by neoliberal economists of the vocabulary of information. [55] Viewing the stuff of economics as information has allowed the neoliberal school to affirm the operation of the market as a wide-reaching technology for social ordering, beyond political institutions and democratic participation. We note, this is not a critique of information theory per se, but a critical observation of its adoption in other disciplines. [56] The observation by economists of economic exchange as information processes has supported an image of the market as a sort of calculative device that operates according to transcendent axioms, which, properly empowered, serves as an optimal guide to public policy. [57] Together with evolutionary theory - information's regular, equally axiomatic counterpart, when adopted by economics and other disciplines - the vocabulary of information has enabled a governmental discourse that simultaneously determines governance positions and insulates them from politics. [58] Consequently, abstractions like the invisible hand and 'survival of the fittest', which operate discursively as market imaginaries of information processing, become axiomatic principles for governmental knowledge and institutional action. In this way, the turn to information theory has included the inscription of neoliberal economic axioms - and the values that they support - into the social domain.

51 A similar point can be made about technology, as the affordances it introduces are eventually determined by the possible interactions in the assemblage. 52 M Callon, 'Some elements of a sociology of translation: domestication of the scallops and the fishermen of St Brieuc Bay', The Sociological Review 32.1 suppl (1984): 196-233. 53 Hildebrandt (2016), supra n. 14. 54 See also, M Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (Edward Elgar 2015) at 47-56.
[59] We raise this not as a strict argument against the use of information theory, but to underscore the need to interrogate the material basis also of the turn to information theory itself. When we interrogate the analytical and normative privileges incorporated with information theory, we expand the scope of our analysis. Hildebrandt's attention to law as information, for instance, leads directly to analysis of law as text, and the texts that Hildebrandt appears to have in mind are further limited to the formal artefacts used to communicate positive rules of law. This tight focus allows her to analyze the relative affordances of hardcopy and digital media for communicating positive rules of law. But such a text-based focus associates law closely with its representative rules and norms. Other textual resources, such as those ethnologists turn to for archival work, eschew this pre-given limitation on what may constitute law, and further point to dimensions of legal practices that go beyond formal rules and representations. [60] When Fleur Johns analyzes the multinational corporate deal, for instance, she observes the relevance for legal practice of the places, performative roles and material implements that go into the variety of interactions among lawyers and clients that contribute to a completed deal.

55 P Mirowski and E Nik-Khah, The knowledge we have lost in information: the history of information in modern economics (Oxford University Press 2017); and P Mirowski, Machine dreams: Economics becomes a cyborg science (Cambridge University Press 2002). 56 Though we might note an epistemological voraciousness to information theory that invites interrogation.
[61] Similarly, an algorithmic decision system intervening in legal, political, and/or economic relations also includes myriad dynamic interactions among people and things in a variety of roles and places. Thus, we are interested in practices of law that have always exceeded the textual representations of formalized rules. And so we aim to comprehend law through a wider ensemble of material determinants. This is not to the exclusion of formal, textual rules, but not limited to them, either. Hildebrandt's take on law as information, by contrast, leads to a narrow delimitation of legal practice to rule of law ideals. From there, she can measure whether or not new and digital media exhibit a consistent connection with rule of law ideals. Note, however, how the turn to information again privileges axiomatic social ordering mechanisms, formerly in the form of the market, here in the Rule of Law. While we take Hildebrandt's celebration of rule of law ideals to represent a valid and important normative program, we further hold that still more rigorous analysis remains necessary to comprehend the actual values produced and propagated by legal practice even once the rule of law (or any other ideal) is affirmed as its normative polestar. In part, this different agenda may derive from different starting points and scales of analysis. Hildebrandt proceeds largely within domestic and comparative frames of reference, in which rule of law ideals represent widely recognized constitutional goods. We start from a perspective of international governance, in which rule of law ideals have been associated, under the guise of development policies, with colonial, imperial and - perhaps most of all since the 1980s - neoliberal value regimes. [62] This returns us to the cautionary lesson in Mirowski's research: the turn to law-as-information, for all of its merits, is not unproblematic.

Consequences in contested domains
What does all this portend for practice? As stated in the introduction, fuller investigation opens new perspectives on intervention, whether for contestation, regulation, or adjudication. Those opportunities will be specific to the assemblages in question. But we can also point to broad shifts that our methodology will entail, for instance by distinguishing the program we propose from ongoing programs predicated on representational ideals, such as rights and the rule of law. [63] We are not arguing against rights-based and rule of law programs; rather, we make the claim that these programs have their limits, that these limits do not represent the horizon of possible knowledge about regulatory futures, and that we can design new research programs, whether critical or complementary to existing legal research, to go beyond such limits.

63 Compare our proposal with that of another contribution to this symposium, by XXXX, entitled 'Averting Enfeeblement and Fostering Empowerment: Algorithmic Rights and the Right to Good Administration', which looks to shore up good administration in the
Consider two examples: first, the general commitment to privacy norms; second, the critical concern with algorithmic bias, especially racial and gendered bias, met firstly with aspirations to fairness norms. Privacy norms have long been a dominant point of focus for governance research and policy with respect to digital and social media. In addressing the actual work done by privacy arguments and policies in the governance of big data applications, however, Fleur Johns has pointed to analysis from outside of the technological context, by Kendall Thomas, concerning the role of privacy norms in domestic constitutional law in the US, applied in the context of criminalized homosexual relationships. [64] Thomas's argument holds that the embrace of privacy as a progressive value had the counterproductive effect of disempowering the persons who would be constitutionally protected by it. [65] Thomas called instead for a more rigorously material analysis to reconstruct a progressive legal practice. Johns updated the argument to point up the limitations of an individualistic tool - traditional privacy rights - to govern concerns associated with exploiting big data. Johns has not been alone. Agnieszka Leszczynski, Deborah Lupton and others have problematized 'traditional modes of understanding and regulating information privacy'. [66] Critical observers have proposed reconstructing privacy models in ways that bear little resemblance to the notion of privacy that still dominates constitutional legal discourse; they propose models such as 'networked privacy', 'contextual privacy', and 'relational privacy'. [67] As the several names suggest, the norm alone no longer suffices: each of these reconstructions emphasizes knowledge of the material relations in which the notion of privacy will be valorized.
68 The possible endpoints can be far-reaching. In a similar vein, for example, Bannerman proposes to 'reconceptualize the self and privacy', thereby to determine new and post-human forms of regulation more adequate for the progressive ambitions not served by traditional privacy arguments. Bannerman (2019), supra n. 67, at 2188 and passim.
Moving on, the recent attention to biases manifest in artificial intelligence, and the desire to meet them with computational adjustments indexed to normative ideals such as fairness, presents a similar dynamic. Bias has been demonstrated and rightly decried in a host of applications, from facial recognition technologies to product delivery systems to advertisement allocations to biorhythmic technologies deployed to police migration at borders, and in the determination of higher education options for students graduating from high school in the UK. [69] Correcting the abuses of bias already perpetrated is necessary, but not sufficient. Any straightforward 'fix' suffers at least two shortcomings over the long term. First, it does little to address the actual distribution of values that underlies discriminatory practices in the first place. In a nutshell, the fix may modulate the effects of problems (of bias) rather than their underlying causes, for instance stopping the moment of bias in the execution of the AI, but not the distributive (or social in/justice) conditions that lead to that moment in the first place. Second, it does little to prevent emergent patterns of discrimination in future distributions of value. Relevant distributive conditions include pervasive marginalization in three crucial spaces - in the data set, among the AI trainers, and in the programming community - where people of color, women, the poor, and other groups are underrepresented. Computational correction here may enroll the marginalized community as a machine-readable object, but without accounting for the underlying conditions of marginalization, and likewise without redressing the concrete absence of agency in the actual development and deployment of the artificial intelligence system or component.
[70] Moreover, the presumption of a transcendent reference point which will allow the computational fix - call it the 'fairness point' - is flawed. Where exactly is such a fairness point located? Ultimately, in an abstraction. The abstraction has two dimensions: an aspiration to a neutral code, against a background of neutral social relations. But the code itself is always already an agent in social relations, so never agnostic among them, and thus neutrality has no concrete referent in the complex dynamics of those social relations. Rather, the code elides real-world conditions by indexing to an abstract norm instead.
The failure of a perfect 'fairness point' does not make it impossible to identify bias, and progressive work in the name of fairness remains crucial, for instance to remediate the work already done by biased programs. But future iterations of similar applications will continue to produce relative winners and losers. As Dimitri van den Meerssche makes clear, the end point is not a neutral or fair world, but a world constructed out of 'a distinct practice of discrimination that does not result from unintended bias, dirty data or system error, but from the functional logic of … computational classifications [that] produce configurations of inequality - described as 'associative inequality' - with significant real-life effects'. [71] Consequently, even after successful efforts to bring fairness to those victimized by biased artificial intelligence, core problems of discrimination by algorithmic processes remain. [72] In this light, the presumption of traditional normative ideals will not be sufficient. Rather, rigorous investigation of socio-technical assemblages and the relations that they comprise, including the emergent distributions of value that they reproduce, will be necessary to address patterned discrimination for governance purposes.

69 Cf, D Kolkman, '"F**k the algorithm"?: What the world can learn from the UK's A-level grading fiasco' (26/10/2020), available at https://blogs.lse.ac.uk/impactofsocialsciences/2020/08/26/fkthe-algorithm-what-the-world-can-learn-from-the-uks-a-levelgrading-fiasco/ . 70 J Powles and H Nissenbaum, 'The Seductive Diversion of "Solving" Bias in Artificial Intelligence', OneZero (7/12/2018), available at https://onezero.medium.com/the-seductive-diversionof-solving-bias-in-artificial-intelligence-890df5e5ef53 .
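The absence of a single 'fairness point' has a concrete computational face, which a toy calculation can illustrate. The following sketch is our own illustration with hypothetical numbers, not drawn from the studies cited above: a classifier that satisfies equalized odds (identical true and false positive rates across two groups) will still violate demographic parity whenever the groups' underlying base rates differ, so the competing fairness criteria cannot all be satisfied at once.

```python
# Illustrative sketch with hypothetical numbers: two groups scored by the
# same classifier. 'Positive' means flagged as high risk.

def group_rates(tp, fp, tn, fn):
    """Return (true positive rate, false positive rate, predicted-positive rate)."""
    tpr = tp / (tp + fn)                   # share of true positives caught
    fpr = fp / (fp + tn)                   # share of true negatives wrongly flagged
    ppr = (tp + fp) / (tp + fp + tn + fn)  # overall share flagged
    return tpr, fpr, ppr

# Group A: 50 of 100 individuals truly positive (base rate 0.50).
# Group B: 20 of 100 individuals truly positive (base rate 0.20).
a = group_rates(tp=40, fp=5, tn=45, fn=10)
b = group_rates(tp=16, fp=8, tn=72, fn=4)

# Equalized odds holds: identical error profiles across the two groups.
assert abs(a[0] - b[0]) < 1e-9  # TPR is 0.80 in both groups
assert abs(a[1] - b[1]) < 1e-9  # FPR is 0.10 in both groups

# Yet demographic parity fails: Group A is flagged nearly twice as often.
print(f"flagged share: A={a[2]:.2f}, B={b[2]:.2f}")  # flagged share: A=0.45, B=0.24
```

The point of the sketch is not that one criterion is the right one, but that choosing among them is itself a distributive decision, which is precisely why no abstract 'fairness point' can settle the matter in advance.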

Encircling values in the assemblage
The most fundamental sites for investigation constitute qualitative values at work in the execution of decision-making processes. Value, as a category, presents a special case among the constituent parts of an assemblage. The reality of values, as with law, does not precede the assemblage. [73] This is true even for value understood as serving some apparent human need or desire: even physical human need will manifest one way in one assemblage and another way in another, and the sense of value associated with its satisfaction will vary with it - hunger may manifest one way for workers on the factory floor, another way for spectators at a major sporting event, another way for persons trapped in a warzone. The debate about the incommensurability of different fairness criteria in automated decision-making systems [74] (partially) highlights this problematic. While the indeterminate character of values is not different from that of other elements in an assemblage, the category of value can refer to a particular function common to every assemblage: values communicate fundamental but provisional situations, ends and purposes that drive enrollment in a given network. Such ends and purposes may be observed from at least two conjoined perspectives: one (micro) perspective includes the appreciation of value manifested by each agent individually in her or his or its association with the network; the other (macro) perspective expands to the wider assemblage, to account for what makes the possibility of a communicable value intelligible in the first place. The assemblage is the constellation that delineates spaces in which the individual and network(s) can locate and translate their material interests as values. The assemblage, however, is never fully determinate nor absolute (though hunger will exist on the factory floor, at the sporting event, and in the warzone, none of these three places will fully determine the valence of hunger there). It (the assemblage) does not exist outside of the social, meaning
that it is constantly subject to relational dynamics and constraints, and it does not exist outside of time, meaning it is constantly subject to change. Just as each material moment is constituted by its situation in the assemblage, the assemblage itself is constituted by the overall relation of its constituent parts. Each is mutually implicated in the possibility of the other. As a result, neither is final or fixed: neither transcends the dynamic balance of relations by which each will be knowable. Both will be ultimately indeterminate, but will exhibit consistency by virtue of more or less stable dynamics among the constituent parts of the assemblage. In this condition, the knowable character of any one constituent part will be delineated by its points of interaction with other constituent parts. These points of interaction are where we direct research with the technique of encircling. We borrow encircling from recent work in security studies, where the research technique has been developed to deal with problems of secrecy. In the particular vein of research from which we borrow, secrecy is treated as 'a dynamic practice and a mode of power', but one which cannot be fully and finally identified or determined. 75 As a result, the method 'is less focused on uncovering the kernel of the secret, than it is on analysing the mundane lifeworlds of security practices and practitioners that are powerfully structured through codes and rites of secrecy.'
76 The description applies as well to our project. Let us offer the example of the door again: observed in isolation, the door may not announce any particular value or value set, but its valence will vary if it is bordered on one side by a bedroom, a bathroom, a cleaning closet, a war room, a research and development lab, a public library, a lion's terrain at a zoo, etc. An encircling exercise will produce different insights into the valence of the door in each case. Ultimately, we adopt a method of 'lateral, multipronged, creative [and] iterative' techniques, recursively and reflexively revisiting multiple passage points and sites of interaction, to map dynamic practices in the production and distribution of values. 77 We note, in addition, that research techniques developed for secrecy are helpful in the area of algorithmic decision systems insofar as aspects of the algorithms at issue may be inaccessible for a variety of reasons, such as trade secrets or assertions of confidentiality, as was for instance the situation in the SyRI case. 78 Assessing the regulatory effects of assemblages that include complex AI systems can profit from combining the technique of encircling with the methodology of legal realism, the latter in the sense described by Duncan Kennedy in his seminal article, 'The Stakes of Law, or Hale and Foucault!'. 79 Speaking in terms of background rules, Kennedy describes a method that looks to the ways in which legal arrangements condition the horizon of expectations, to determine the possible outcomes for any given interaction. 80 Those arrangements, and with

75 de Goede, Bosma, and Pallister-Wilkins (2019), supra n. 23, at 14.
76 Ibid.
78 NJCM et al. v. The State of The Netherlands, supra n. 33, para. 6.65.
79 D Kennedy, 'The Stakes of Law, or Hale and Foucault!', Legal Studies Forum 15 (1991): 327.
80 Ibid at 345.
them the horizons of expectations that they establish, are the product of more or less negotiated struggles between competing, materially grounded social interests. 81 Thus the legal realist inquiry, by directing attention to the horizons of expectation for any given interaction and the competing social interests demarcating that horizon, brings together several elements of our methodological framework: the technique of encircling; the dimension of expectation management identified above with systems theory and affordances; and the interdisciplinary analysis of the social and material interactions involved in the development and deployment of technology with regulatory consequences. Encircling in particular trains our attention on the specificity of these assembled conditions, directing us to concrete moments in which an action that is not determined in advance is conditioned by specific relations, routines and concrete circumstances. Encircling, however, remains a novel technique outside of its development in security studies, and so our adoption of it includes the aim, over time, of refining and systematizing its use in practice for the purposes described here.

Conclusion: Renewing empiricism
The need for encircling and the recognition of the void that encircling demarcates together point up a paradoxical takeaway for a materialist inquiry: traditional empiricism is not enough. Positive factual observation is necessary but not sufficient to comprehend the realities of governance with law and (artificial intelligence) technology. Why not? For one thing, positive factual analysis determined by empirical means neglects the obstacle of indeterminacy: the encircled space (of value, in our project) is stabilized at and by the positive passage points that encircle it, but the space itself remains open. Even more so, the constellation of encircling passage points exists as such precisely because the spaces they encircle defy positive closure. The positive empirical inquiry alone cannot adequately account for the constitutive absences intrinsic to its object. As a result, when the indeterminate nature of the encircled space is neglected, the material constellation reverts to the unknowable thing-in-itself, a self-contained object that returns the inquiry to the Kantian dilemma of the autonomous object. The positive inquiry becomes self-defeating: when it posits its own sufficiency, which entails closure of the positive datum to which it is directed, it makes the object of its analysis ultimately unreachable by any research method. Consider in this light the first line of defense for algorithmic systems that exhibit systemic bias in their application, whether with respect to awarding bail or flagging credit risks: that the problem lies not with the algorithm, because the math itself is not biased. The algorithm, by this logic, is sealed off from the social world, a perfect artifact in a compromised environment. Moreover, though the algorithm appears to be 'doing something', it is nonetheless stripped of agency, as though its effects were generated outside of itself. The research corrective is a reflexive one. The way that we propose to know the technology and the means by
which we know it matter. This reflexive dimension brings our methodology in line with the call for a critical practice of artificial intelligence formulated by Philip Agre. 82 Agre explains that the mainstream of AI research and development is predicated on a singular question: does a proposed alternative work better? But, Agre makes clear, this begs a prior question, namely: what does it work for? To answer this question, we must investigate the actual work that the technology achieves, including (and especially) the ways in which it constitutes the driving problem in the act of providing a solution. To know what the technology works for demands that the normative and material dimensions of the assemblage be recognized as continuous and concrete. Continuity includes the way in which the underlying problem (to which the technology is applied) is constructed and perceived in the first place; and at this point, the crucial methodological point should be clear: like law and value, the problem does not preexist the assemblage mobilized to address it. Further, the problem is (re)constituted in every moment in which it is at stake. Accordingly, our methodology calls for identifying and encircling all of the points at which the (social, legal) problem addressed by AI technologies is materially and performatively constituted by the same.

81 Ibid at 328 and passim.
We suspect that our proposal, in the end, may not seem so foreign to practicing lawyers. Litigators, for instance, have long been creative about the sites of their interventions. To contest border violence, one might, for instance, relocate the problem to target the makers of razor wire. 83 Our proposal would systematize the intuition behind such ad hoc actions, and develop a conceptual roadmap for more thoroughgoing insight and intervention. But this remains only a proposal, which will require execution in multifaceted contexts, whether research- or practice-oriented. Ultimately, our ambition is an integral framework that allows us to investigate the ways in which AI technologies constitute the problems they address in the act of providing solutions, on the basis of values materially inscribed in their assemblages (rather than on the basis of ideals projected onto them). From there, we can establish a more grounded understanding of the actual regulatory work done on the basis of that construction of the problem, endogenous to the technology and technical practices doing the actual governance work, and so produce a more comprehensive vision of possible governmental futures.

82 P Agre, 'Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI', in G Bowker et al., eds., Bridging the Great Divide: Social Science, Technical Systems, and Cooperative Work (Erlbaum 1997).