Legal dispositionism and artificially-intelligent attributions

Abstract

It is conventionally argued that because an artificially-intelligent (AI) system acts autonomously, its makers cannot easily be held liable should the system's actions cause harm. Since the system cannot be liable on its own account either, existing laws expose victims to accountability gaps and need to be reformed. Recent legal instruments have nonetheless established obligations against AI developers and providers. Drawing on attribution theory, this paper examines how these seemingly opposing positions are shaped by the ways in which AI systems are conceptualised. Specifically, folk dispositionism underpins conventional legal discourse on AI liability, personality, publications, and inventions, and leads us towards problematic legal outcomes. Examining the technology and terminology driving contemporary AI systems, the paper contends that AI systems are better conceptualised instead as situational characters whose actions remain constrained by their programming. Properly viewing AI systems as such illuminates how existing legal doctrines could be sensibly applied to AI and reinforces emerging calls for placing greater scrutiny on the broader AI ecosystem.


Introduction
It is conventionally argued that an artificially-intelligent (AI) system's actions and their potentially harmful consequences cannot easily be attributed to the system's developers or operators because the system acts autonomously.1 Nor can the system, which has no legal personality, be liable on its own account. Victims are left exposed to accountability gaps,2 and thus AI disrupts law,3 necessitating new models of legal analysis.4 In specific doctrinal areas, this question is typically framed as a 'missing person' problem: when, instead of humans, AI systems drive,5 contract,6 defame,7 make art,8 commit crimes9 and, more broadly speaking, cause harm,10 how should law respond?11 Questions of this form are beginning to reach the courts.12 Aiming to plug this perceived gap, in 2017 the European Parliament (EP) proposed a 'specific legal status' for AI 'so that at least the most sophisticated autonomous robots could be … electronic persons responsible for making good any damage they may cause'.13 But this resolution was strongly criticised by legal and technological experts as premised on 'an overvaluation of the actual capabilities of even the most advanced robots, a superficial understanding of unpredictability and self-learning capacities, and a robot perception distorted by Science-Fiction'.14 The proposal was promptly shelved, and a 2020 resolution would instead emphasise that electronic personality was unnecessary because 'all physical or virtual activities … driven by AI systems … are nearly always the result of someone building, deploying, or interfering with the systems'.15 This position is reflected in recent EU legal instruments including the draft AI Act, which imposes regulatory obligations on providers, distributors, and users of certain AI systems.16

Of course, AI technology did not become any less sophisticated between 2017 and today.17 The primary difference between the 2017 and 2020 resolutions lies in how each conceptualised AI systems. In 2017, they were intelligent, autonomous beings analogised to Prague's Golem and Frankenstein's Monster.18 In 2020, they were software units programmed by humans to act within pre-defined boundaries.

This paper examines how these opposing AI conceptions animate legal debates surrounding fault and liability attributions for AI systems. Drawing upon psychological 'attribution theory', or the study of how 'ordinary people [attribute] causes and implications [to] the events they witness',19 the paper contextualises the 'AI autonomy' frame above as one built on folk 'dispositionism', a well-documented concept in attribution theory, and demonstrates how easily dispositional AI narratives can be manipulated to promote a desired legal conclusion. It then characterises recent proposals to focus on identifying human actors responsible for AI system behaviours as 'situationist' responses which view AI systems as what Hanson and Yosifon call 'situational characters': entities whose behaviours are driven more by external than internal forces.20 Reviewing the technical capabilities of contemporary AI systems, the paper argues that they are better understood through a situationist lens. Unlike human DNA, which forms part of our natural dispositions, today's AI systems' decisional processes are written, controlled, and continually re-written by human actors.21
Contextualising the legal AI discourse within attribution theory illuminates how future discourse and policy-making surrounding AI systems should proceed. Specifically, it reinforces proposals focusing less on AI systems themselves than on the eco-system of providers, distributors, and users around them. Conversely, arguments premised on framing AI systems as sentient, intelligent beings are put in doubt. More broadly, attribution theory provides a framework for identifying pivotal misconceptions underlying conventional arguments on the legal conceptualisation of AI systems. Dispositional versus situational narratives subtly shape the questions we ask, and the answers we give, on AI liability attributions. As with philosophy and computer science,22 AI provides a backdrop against which 'normative structure[s] underlying our understanding of law' may be challenged and re-examined.23 Thus, the paper's broader significance, especially for scholars interested in more than law and (AI) technology per se, lies in revisiting the implications of attribution theory for law.24

The paper first introduces attribution theory and its legal implications. Next, it identifies how far dispositionism animates conventional legal AI discourse by reference to jurisprudence surrounding AI liability, personality, publications, and inventions. Third, it examines how contemporary AI systems operate and argues that they are better understood situationally. The paper concludes with an attribution-theory informed framework for analysing AI-related attributions.
Before proceeding, it should be clarified that this paper focuses on the AI systems in use and development today and says nothing about the attainability of, and potential legal issues around, 'strong AI'25 systems.26 Nonetheless, as the technology continues to develop, this work would form an important plank for understanding how the law should conceive of and respond to increasingly sophisticated AI systems. Further, this paper is primarily concerned with fault and liability attributions; issues and materials on AI ethics and governance will only be referenced briefly where relevant.

Attribution theory, artificial intelligence, and law
First observe that the conventional 'missing person' frame oversimplifies. Law does not necessarily require specific action(s) to be taken by specific person(s). Instead, established rules of attribution are deployed to deem one's actions (or liability) as another's.27 These rules are usually premised on familiar doctrines such as control.28 Thus a company can be liable for employee wrongs,29 a platform can publish user-created content,30 a landlord can be responsible for a tenant's nuisance,31 and an animal's keeper can be liable if it bites.32 The difficulty with AI is better thought of as a problem with applying these attribution rules in light of AI's apparent autonomy. Lloyd-Bostock distinguishes between attribution as 'a relatively unreflective … process of making sense of and getting about in the world' on one hand and a deliberate 'social act' where norm-violating events are to be explained on the other.33 AI systems challenge both kinds of attributions. For the former, the technology's complexity makes intuitive assessments of factual cause-and-effect in relation to AI systems difficult. For the latter, AI's ostensible independence from human control obfuscates assessments of to whom their actions should be attributed. As Pasquale notes, drawing clear lines of AI responsibility is difficult because 'both journalists and technologists can present AI as a technological development that exceeds the control or understanding of those developing it'.34

(a) The dispositionist default

Insofar as the problem is one of attribution, it follows that law can draw important lessons from attribution theory. Attribution theorists would understand the missing person frame and our resulting search for new personalities to fault as a classic dispositionist response. Dispositionism models an agent's behaviour as primarily driven by the agent's internal calculus: its personality, character traits, and preferences.35 'Good' and 'evil' are basic adjectives for moral dispositions, though law prefers more nuanced terms such as 'dishonest', 'careless', and 'reckless'.
Dispositionism offers an elegant mechanism for attributing moral blame and legal liability because assuming that internal nature drives external behaviour lets us infer the former from observing the latter. One who returns a dropped wallet does so because they are good and honest; one who keeps it is evil or dishonest. The wallet-keeper, having demonstrated a morally-suspect disposition, is then blameworthy. One can fairly be held liable for one's actions, and their consequences on the world, because these actions are by and large expressions, and reflections, of one's true nature.36

The roots of dispositionism have been traced to Western philosophy,37 finding expression in Aristotelian conceptions of virtue,38 Cartesian notions of individual will,39 and contract theory.40 It also features in Western legal theory,41 for instance in the 'will theory' of contracts42 and the 'autonomy doctrine' for attribution.43 Thus legal fault is often premised on dispositionist notions of intention, control, and consent.44 The more dispositional the injurer's actions, and the less dispositional the victim's, the more we are likely to fault the former, and seek remedy for the latter.45 Dispositions are not reserved for natural persons; companies and organisations are commonly ascribed with personalities as well.46

(b) The situationist critique

To situationists, however, dispositionism commits a 'fundamental attribution error',47 being 'the error of ignoring situational factors and overconfidently assuming that distinctive behaviour or patterns of behaviour are due to an agent's distinctive character traits'.48 The situationist case, which finds support across psychology, moral philosophy, and law,49 is premised on empirical evidence of human behaviour. One canonical example50 is Milgram's obedience experiment,51 where a surprising majority (65 per cent) of volunteers were willing to administer a full course of intense electric shocks (up to 450 volts) to unseen human 'learners' in another room, despite the latter's vigorous, albeit staged, protests.52 Situationists attributed Milgram's results to the power of the volunteers' situation: the gradual shift from the innocuous to the potentially fatal, the experimenter's authority, and the confusing circumstances participants were thrust into.53 Because we are geared to 'see the actors and miss the stage',54 these situational forces, though obvious in hindsight, were largely overlooked.
Situationists therefore argue that we assign more moral and legal weight to disposition than empirical truths about human behaviour suggest is warranted. Since 'our attributions of causation, responsibility, and blame - and our assessments of knowledge, control, intentions, and motives - are not what we suppose they are',55 insofar as law relies on dispositionist conceptions of these doctrines, it risks itself committing the fundamental attribution error. If anti-social behaviour is produced more by situation, and less by disposition, than commonly thought, then law's focus on correcting faulty dispositions cannot effectively deter bad behaviour; the situational causes of such behaviour must be rectified.
To be sure, Milgram's experiments have been subjected to two waves of criticism arguing that they had been misrepresented and misinterpreted.56 Nonetheless, modern situationist work, while still referencing Milgram, rests on a broader evidential base.57 More importantly, social psychologists have shifted from 'strong situationism' towards 'interactionism': explaining behaviour as interactions between disposition and situation (though their explanatory shares unsurprisingly remain disputed).58 Thus, the claim is not that situation alone drives behaviour, nor that situation is always completely missed.59 In extreme cases, such as the classic gun to the head, situation is prominent enough to be detected.60 This is consistent with how exculpatory situations such as duress, inevitable accident, and circumstantial reasonableness are not foreign to law. The argument, more precisely, is that law under-appreciates situation while over-prioritising disposition. Therefore, while situationism has its own critics,61 this paper's thesis does not require one to unconditionally accept situationism nor categorically reject all of dispositionism. Rather, the former is advanced as a completing rather than competing account of AI systems.

(c) Disposition versus situation in law
Given attribution theory's implications for legal fault attributions, legal scholarship on attribution theory is surprisingly scarce, particularly in the context of AI systems.62 Situationism has primarily been applied in the context of criminal responsibility63 and American tort law.64 Thus, before examining how attributional frames shape the AI discourse, an illustration with a classic English case is useful.
In Miller v Jackson,65 the Millers claimed in nuisance against a cricket club for cricket balls repeatedly landing in the former's property. Denning MR's dissent, which would have held against the Millers, predictably framed them dispositionally. They were 'newcomer[s] who [were] no lover[s] of cricket' and who specifically 'asked' the court to stop the sport.66 In this narrative, the Millers had moved themselves into their present position. Conversely, the cricket club had 'done their very best to be polite'67 and did 'everything possible short of stopping playing cricket on the ground'.68 But the Millers 'remained unmoved'.69

The majority's Millers were cast differently. For Lane LJ, cricket balls had been landing dangerously in their property: one had 'just missed breaking the window of a room in which their (11 or 12 year old) son was seated'.70 To Cumming-Bruce LJ, cricket balls were 'falling like thunderbolts from the heavens'.71 The neighbouring Milners, and their nine-month-old infant, were also subject to this danger.72 In this narrative, the residents had merely sought to go about their daily lives, 'picking raspberries in the garden',73 but simply could not because of the situation they had been thrust into. All three judges heard the same evidence, but the narrative each side told differed in the precise manner attribution theory predicts.74

In this way, attribution theory yields descriptive, predictive, and prescriptive insights for law. Descriptively, injurers may be cast as actors who chose certain intended actions giving rise to harmful events; victims are vulnerable persons being moved by, rather than moving, those events, and often rely on the injurer's dispositional control.75 Predictively, the extent to which dispositional/situational narratives can be sustained for claimants/defendants indicates how parties may argue, how judges may decide, and how those decisions may come to be justified. Prescriptively, situationism suggests that law should be cognisant of narrative manipulation. If our conclusions regarding concepts like volition and control turn on narrative framing, it is worth asking how reliable they are as tools for attributing fault. Notice that, to portray the Millers as situational characters, the majority dispositionise the cricket balls, describing them as 'thunderbolts' bearing down on the plaintiffs. Yet 'if ever there was an item that is moved more obviously by something other than its own volition, it is a ball'.76 What then about those who struck the cricket balls to begin with?

Artificial intelligence as dispositional actors
If balls can be dispositionised to influence law, it is not surprising that AI systems, which appear to behave as humans do, could also be. Since lawyers are not typically trained in the technicalities of AI systems,77 we naturally ascribe what Dennett calls 'intentionality' towards AI systems so as to explain and manage what we cannot otherwise comprehend.78 This section demonstrates how far AI dispositionism shapes legal discourse, in the process examining popular conceptions of AI alongside debates on AI liability, personality, publications, and inventions.

(a) Popular culture
In science fiction, AI systems typically present as sentient, embodied robots who reason, act, and want.79 Influenced by such imagery, popular culture tends to describe non-fictional AI systems as 'evil'80 and 'biased',81 imputing to them thoughts and emotions. In 2016, the chatbot 'Sophia' made headlines by answering, '[o]k. I will destroy humans' in response to a question from its creator David Hanson. One contemporary headline reported that a '[c]razy-eyed robot wants a family - and to destroy all humans'.82 Did Sophia 'want' to do so, or was it merely programmed to reproduce these words? That is, did the answer stem from 'her' internal disposition, or was it simply coded as a set piece in the chatbot's software? AI experts preferred the latter, arguing that Sophia was a mere 'puppet' with neither free will nor autonomy.83 Its creators had deliberately cast the robot in a dispositional light as a 'publicity stunt'84 and 'political choreography' to market the technology.85 This notwithstanding, Sophia remains an icon for modern AI technologies frequently covered by news outlets86 and was in 2017 granted legal citizenship in Saudi Arabia.87

The dispositional AI narrative is not limited to sensationalist tabloids. By selectively prioritising quotes sourced from AI companies and deliberately drawing parallels between AI systems and humans, the general media constructs expectations of a 'pseudo-artificial general intelligence' that does not exist.88

In turn, this narrative shapes popular thinking around AI liability. In 2018, history's first pedestrian fatality linked to automated vehicles (AVs) occurred in the United States. One contemporaneous headline reported that a '[s]elf-driving Uber kill[ed] Arizona woman in first fatal crash involving pedestrian',89 implying the primary culprit was the vehicle itself, not Uber the company, nor anyone else involved in the vehicle's development or use. A similar framing emerges from another headline: '[s]elf-driving Uber car that hit and killed woman did not recognise that pedestrians jaywalk'.90

(b) AI liability

The law is not wholly determined by lay conceptions of liability. But it may not escape their influence either. The question AVs pose to law is conventionally framed in terms of a missing person problem: when AI replaces human drivers, who - if anyone - is liable for accidents?91 Notice how the idea of AI 'driving' begins to dispositionise the system: the main actor seems to be 'the AI' itself, but since AI systems are not legal persons, they cannot be liable despite being the perpetrator towards which dispositionism points. Thus, the European Commission has questioned the 'appropriateness' of traffic liability regimes which either 'rely on fault-based liability' or are 'conditional on the involvement of a driver'.92 More broadly, Chesterman calls this the 'problem of autonomy' which AI systems pose to law.93 Since the vehicle acted 'autonomously', it appears that no person, human or legal, can be faulted for the accident. The crux lies in how far AI driving systems (ADS) can properly be said to be autonomous.
Chesterman notes that 'autonomy' requires the ADS to be 'capable of making decisions without input from the driver'; such a system would differ from mere 'automations' like cruise control.94 The line between automation and autonomy, however, is seldom clear. Most legal commentators adopt the Society of Automotive Engineers' (SAE) six levels of driving automation, found in a standards document indexed 'J3016'.95 First published in 2014, J3016 was substantially revised in 2016, 2018, and 2021. Since 2016, the standard has only used 'automation', even to refer to vehicles at the highest levels. The SAE deliberately avoided 'autonomy', arguing that the term could 'lead to confusion, misunderstanding, and diminished credibility' because:96

in jurisprudence, autonomy refers to the capacity for self-governance. In this sense, also, 'autonomous' is a misnomer as applied to automated driving technology, because even the most advanced ADSs are not 'self-governing'. Rather, ADSs operate based on algorithms and otherwise obey the commands of users.
Because the engineers' definition of 'autonomy' only requires that a system 'ha[s] the ability and authority to make decisions independently and self-sufficiently',97 it encapsulates a range of technologies, such as thermostats,98 to which attributing legal autonomy would be strange. Legal commentaries have nonetheless continued to use the term.99 Beyond AVs, AI autonomy remains cited as a key challenge to existing liability regimes.100 Smith thus identifies the 'inconsistent use of several key terms [relating to autonomous systems] within and across the legal, technical, and popular domains' as a source of 'potential and ultimately unnecessary confusion'.101 Indeed, the engineering literature itself displays 'a profusion of concepts and terms related to autonomy'102 and oscillates between conceptions of autonomy as self-governance (i.e. the primacy of internal control) and self-directedness (i.e. freedom from external control).103

Therefore, the issue here is less a problem of autonomy than one with autonomy.104 Both the definition of autonomy and its application to identifying truly 'autonomous' systems are ambiguous105 and subjective.106 Since 'automation' frames the system situationally, while 'autonomy' presupposes and implies disposition, the term one chooses, and the resultant analysis, could be driven by motivated reasoning.107

(c) AI personality
The longstanding debate on whether AI systems should have legal personality108 was brought into focus by the 2017 EP resolution which proposed limited electronic personality for 'at least the most sophisticated autonomous robots'.109 The ensuing controversy plays out as attribution theory expects. The 2017 resolution demonstrated a classic, pop-culture-informed tendency to dispositionise AI. It emphasised AI autonomy, referring to science fiction to make the point.110 The expert critique offered a situationist response: first noting that claims of AI autonomy are overblown, and second calling out stakeholders 'in the whole value chain who maintain or control' the AI system's risks.111 Echoing the position that AI personality was unnecessary, the 2020 resolution highlighted the situational forces underlying AI systems: their behaviours 'are nearly always the result of someone building, deploying, or interfering with the systems'.112

AI personality scholarship demonstrates similar tendencies. Proponents generally offer two types of arguments.113 First are arguments based on the inherent qualities of AI, including but not limited to autonomy, intelligence, and consciousness. For instance, Hubbard argues that, given the Lockean imperative that all humans should be treated equally because we all possess 'the same faculties', any AI system which possesses these faculties should likewise have a prima facie right to personhood.114 Second are instrumental arguments based on the extrinsic usefulness of AI personhood. For instance, Čerka and colleagues argue that establishing liability against AI developers is difficult under present laws because of the AI system's 'ability to make autonomous decisions, independently of the will of their developers, operators or producers'.115 Likewise, Koops and colleagues identify challenges with determining the applicable law and enforcing it with AI becoming 'increasingly autonomous'.116 Personality is proposed to bridge this 'accountability gap'.117

While dispositionism directly underpins the inherent arguments, instrumental arguments implicitly build on it also: the legal gaps asserted critically assume that AI autonomy precludes the operation of existing laws. Unsurprisingly, the case against personality essentially contests how far AI systems are autonomous or intelligent.118 The issue, once again, is whether AI systems are better understood dispositionally or situationally.

(d) AI publications
Courts considering when an algorithm's developers 'publish' defamatory material the algorithm produces have likewise reached opposite conclusions on similar facts, in a manner which attribution theory predicts. Those holding that developers are not publishers typically highlight how there is 'no human input' in the results' production.119 'It has all been done by the web-crawling "robots"';120 the developer merely plays a 'passive'121 role in facilitating the same. Conversely, courts holding that developers can be publishers stress that they intentionally designed, developed, and deployed the algorithm. Thus, Beach J in Trkulja v Google (No 5) held that 'Google Inc intended to publish the material that its automated systems produced, because that was what they were designed to do'.122 McDonald J, in a related case, highlighted 'the human input involved in the creation of the algorithm' and how the defamation was 'a direct consequence' of the search engine operating 'in the way in which it was intended to operate'.123

More recently, in Defteros v Google LLC the Victorian Court of Appeal reiterated that Google's search engine was 'not a passive tool' but something 'designed by humans who work for Google to operate in the way it does'.124 This was reversed by a High Court of Australia majority who did not consider Google's role in communicating the defamatory material sufficiently active.125 The dissenting justices argued that, given how search engines operated, Google was more than a 'passive instrument' conveying information126 and had 'intentionally' participated in communicating the material.127

Every case in the Trkulja-Defteros litigation involved the same search engine and operator, but each court's reasoning on publication was shaped by whether it understood the algorithms and their creators dispositionally or situationally. Tracing the EP resolutions, if search companies are not to be liable for defamation, we might describe the content as generated by 'sophisticated', 'autonomous', and 'intelligent' robots. But if they are to be liable, we might emphasise how search outputs are always 'the result of someone building, deploying or interfering with the [algorithm]'.128

To be sure, outcome differences in these cases must also be explained by reference to key factual differences that in turn shaped how the complex law and policy considerations surrounding online defamation applied.129 For instance, in the Trkulja cases the search company had notice of the defamatory material; in Metropolitan and Bleyer it did not. The narrow point here is that the dispositional/situational framing of Google's search algorithms influences, although it may not wholly determine, judicial analysis on algorithmic publications. It is also remarkable that every court above was, regardless of how it reasoned, content to base its framing of Google's algorithms on broad narrations, instead of specific technical details, of how those algorithms operate.

… network.195 'Memory' is a particular type of neuron (i.e. computation) which feeds into itself such that previous computations influence subsequent ones more directly.196 These metaphors make the maths appear as if it has its own mind, but they neither entail nor imply that it does. As Cardozo CJ famously held, '[m]etaphors in law are to be narrowly watched, for starting as devices to liberate thought, they end often by enslaving it'.197 Likewise, Calo notes that judges' 'selection of a metaphor or analogy for a new technology can determine legal outcomes' surrounding AI.198

(c) Mathematical dispositions are not human dispositions

Secondly, even if we wanted to dispositionise maths, maths does not think or act as we do. Whether an AI system's internal formulae are manually specified or statistically learned, its 'disposition' is entirely encapsulated in those formulae. Since these dispositions are mathematically expressed, they can also be mathematically explained. To illustrate, we might say that our recidivism predictor above 'prefers' those with no violent antecedents the most, since its formula weights that factor most. Moreover, these formulae are fixed after training, and only updated if the learning algorithm is run on new data. Thus the predictor's 'disposition' is stable and deterministic: the same inputs always produce the same outputs. By contrast, we cannot ascribe numbers to how the human mind weighs factors; these weights can and do change over time.
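To make this concrete, the following is a minimal sketch of the kind of fixed, inspectable 'disposition' described above. The factors, weights, and scoring function are hypothetical illustrations, not those of any actual recidivism instrument.

```python
# A hypothetical linear 'recidivism predictor': its entire 'disposition'
# is the fixed set of weights below, which can be read off directly.
WEIGHTS = {
    'violent_antecedents': 0.70,  # weighted most: the model 'prefers' those without any
    'prior_convictions': 0.25,
    'age_under_25': 0.05,
}

def risk_score(offender: dict) -> float:
    """Deterministically combine the input factors using the fixed weights."""
    return sum(WEIGHTS[factor] * offender[factor] for factor in WEIGHTS)

offender = {'violent_antecedents': 0, 'prior_convictions': 2, 'age_under_25': 1}

# Stable and deterministic: the same inputs always produce the same output.
assert risk_score(offender) == risk_score(offender)
print(risk_score(offender))  # 0.55
```

The weights change only if the training algorithm is re-run on new data; nothing in the formula updates itself.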
To be sure, much depends on the specific algorithm(s) used. For large NNs that compute billions of weights across millions of factors, unravelling how the system weights each factor can be prohibitively difficult. Even assuming an AI system's prediction algorithm is stable, inputs received in real-time deployment may be ephemeral, prompting split-second changes in the system's outputs. Such opacity indeed challenges fault and liability attributions where victims often need to prove specific software defects and identify person(s) at fault for those failures.199 While AI researchers have dedicated an entire sub-field towards AI explainability,200 explanations created from those techniques are often not the kind law requires.201

Opacity must, however, be distinguished from autonomy. An NN may perform ten billion computations and tweak its output ten times per microsecond, but maths writ large is still maths. If one linear regression is neither sentient nor (truly) autonomous, what changes, if anything, when one links together a (hundred) thousand regressions? (A short sketch at the end of this section illustrates that, mechanically speaking, nothing does.) Opacity does not imply autonomy, even assuming the converse holds. Our legal system is opaque to most laypersons, and the best lawyers often cannot predict how it will behave, but we do not say that it therefore acts autonomously and in a way which justifies legal personality, rights, and obligations. Crucially, unlike humans, today's AI systems cannot act beyond what they are programmed to do, even to fulfil their 'wants'.202 Our recidivism predictor may 'prefer' offenders with fewer violent antecedents, but it cannot, say, propose laws for reducing violent crime. Likewise, Sophia can only produce textual responses to textual prompts. 'She' cannot take steps towards starting a family or destroying humans. This is not to say that AI systems have no 'autonomy' at all, only that the label attaches primarily in an engineering sense.

Notably, this does not necessarily mean abandoning existing (tort) law entirely. Negligence standards, given their focus on circumstantial reasonableness, are compatible with situationist models of responsibility.215 Moreover, once any misconception of contemporary AI systems as science-fictitious, autonomous thinking machines is avoided, existing (dispositional) doctrines generally have fewer problems encompassing AI systems. Recalling the Trkulja litigation, once we acknowledge that search algorithms merely produce the results their programmers designed and built them for, search companies can be said to have intentionally published those results. It has also been argued that the doctrine of control, clarified for AVs, could be meaningfully applied towards determining AV liability.216 What needs to change is not existing laws per se, as conventionally asserted, but how the law conceptualises AI systems.
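As flagged above, the point that chaining many regressions together changes the scale, not the nature, of the computation can be illustrated with a minimal sketch. The architecture and numbers are hypothetical; a real NN would differ in size, not in kind.

```python
import random

random.seed(0)  # freeze the weights, as training would

# A toy 'neural network': layers of linear regressions chained together.
def make_layer(n_in, n_out):
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def apply_layer(layer, inputs):
    # Each 'neuron' is a linear regression passed through a simple threshold.
    return [max(0.0, sum(w * x for w, x in zip(neuron, inputs))) for neuron in layer]

layers = [make_layer(4, 8), make_layer(8, 8), make_layer(8, 1)]

def network(inputs):
    for layer in layers:
        inputs = apply_layer(layer, inputs)
    return inputs[0]

x = [0.2, 0.5, 0.1, 0.9]
# However many layers are chained, the output remains a fixed, mechanical
# function of the input: opaque at scale, perhaps, but never 'self-governing'.
assert network(x) == network(x)
```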
To illustrate, suppose a developer D creates an AI system S that Qs with legal consequence L. Assuming L is a harmful consequence, D would like to avoid being fixed with L and argues that S Q-ed autonomously, independently of D's control, intention, and design. The first step must be to ascertain S's technical nature, stripped of any dispositionist baggage Q presents in. While courts may not have the expertise to delve into technical complexities, those who claim their AI to be autonomous may fairly be expected to prove it.
Next, regardless of step one's outcome, deliberate attention should be paid to situational player(s) who shaped S's behaviour.This points first to D, but might also identify other stakeholders, for instance, if D sold S to operator O. Consistent with standard product liability principles, had O deployed S in an environment which D expressly warned against, O's risk contribution cannot be ignored.This step might therefore identify multiple attribution targets.
Selecting the 'right' target(s) from this list turns on the specific laws and facts at play, but the relative contribution each target makes towards determining S's behaviour is a key consideration. If L is a legally divisible consequence like liability, L might be apportioned proportionately to harm/risk contribution. Indivisible obligations like contracts may be best attributed to the party who contributed the most.
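As a purely arithmetical illustration of proportional apportionment, consider the sketch below; the stakeholders, contribution shares, and damages figure are all hypothetical.

```python
# Hypothetical apportionment of a divisible consequence L (here, damages)
# in proportion to each stakeholder's contribution to S's behaviour.
damages = 100_000

# Assumed relative contributions (eg from step two of the framework above).
contributions = {'developer_D': 3.0, 'operator_O': 1.0}

total = sum(contributions.values())
apportioned = {who: damages * share / total for who, share in contributions.items()}

print(apportioned)  # {'developer_D': 75000.0, 'operator_O': 25000.0}
```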
To be clear, situationism's insights would be wasted if it were merely used to identify targets for conventional dispositionist analysis. Each stakeholder's contributions should ideally be assessed situationally as well. We should consider, for instance, actions taken by other stakeholders, the scientific state-of-the-art, and inputs received by AI systems from their deployed environments. This explains why commentaries adopting more technically accurate views of AI systems favour apportioning safety and compensatory obligations across multiple stakeholders.217 Such inquiries may, of course, be more complex and expensive than we are used to. Thus, situationism may support moving more radically towards no-fault systems financed by eco-systemic actors,218 as well as policies targeting systemic change219 (eg building AI literacy220). However, these proposals fall beyond this paper's scope and are best explored in future work.

Conclusion
This paper situates legal debates on AI within the context of attribution theory and uses situationism in particular as a foil to highlight law's traditionally dispositionist tendencies and critique unquestioned AI dispositionism. Folk conceptions of AI permeate the conventional legal, regulatory, and judicial AI discourse, leading to the exact attributional errors that situationists have long criticised. This not only threatens the credibility of legal AI analyses; because dispositional AI narratives are easily manipulable, allowing them to shape legal outcomes is problematic. Overcoming AI dispositionism does not necessarily require total reform; recognising AI systems as situational characters, as recent legal instruments are beginning to do, is sufficient. Implementing this paradigm shift may be challenging, but the more we are interested in an account of AI based on fact rather than fiction, the more we should be willing to abandon fallacious AI anthropomorphisms and re-direct attention to the situational forces driving how today's AI systems 'think', 'act', and harm.

20 J Hanson and D Yosifon 'The situation: an introduction to the situational character, critical realism, power economics, and deep capture' (2003) 152 University of Pennsylvania Law Review 129.
21 See also J Cobbe 'Administrative law and the machines of government: judicial review of automated public-sector decision-making' (2019) 39(4) LS 636 at 639.
22 See AM Turing 'Computing machinery and intelligence' (1950) 59 Mind 433; JR Searle 'Minds, brains, and programs' (1980) 3 Behavioral and Brain Sciences 417; BJ Copeland 'The Turing test' (2000) 10 Minds and Machines 519. The debate persists, for instance, in R Manzotti and A Chella 'Good old-fashioned artificial consciousness and the intermediate level fallacy' (2018) 39 Frontiers in Robotics and AI 1; EM Bender and A Koller 'Climbing towards NLU: on meaning, form, and understanding in the age of data', Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (Online: Association for Computational Linguistics, 2020).
23 M Zalnieriute et al 'The rule of law and automation of government decision-making' (2019) 82 Modern Law Review 425 at 426. See also G Samuel 'The challenge of artificial intelligence: can Roman law help us discover whether law is a system of rules?' (1991) 11 LS 24.
24 This retraces to Lloyd-Bostock's observation that 'little attempt ha[d] been made to relate' the 'extensive body of literature in psychology on the attribution of causes and responsibility' to law: S Lloyd-Bostock 'The ordinary man, and the psychology of attributing causes and responsibility' (1979) 42 MLR 143. This was partially a response to Hart and Honoré's discussion on common sense causality in HLA Hart and T Honoré Causation in the Law (Oxford: Clarendon Press, 2nd edn, 1985). Subsequent work on law and attribution theory has revolved around the latter's relevance to causality, especially in criminal law. See, for example, NJ Mullany 'Common sense causation - an Australian view' (1992) 12 Oxford Journal of Legal Studies 431; A Summers 'Common-sense causation in the law' (2018) 38 Oxford Journal of Legal Studies 793; A du Bois-Pedain 'Novus actus and beyond: attributing causal responsibility in the criminal courts' (2021) 80 Cambridge Law Journal S61.

40 J Hanson and D Yosifon 'The situational character: a critical realist perspective on the human animal' (2004) 93 Georgetown Law Journal 1 at 10-12.
41 C Haney 'Making law modern: toward a contextual model of justice' (2002) 8 Psychology, Public Policy, and Law 3 at 5-6.
42 See generally AS Burrows 'The will theory of contract revived - Fried's "contract as promise"' (1985) 38 Current Legal Problems 141.
43 G Williams 'Finis for novus actus?' (1989) 48 Cambridge Law Journal 391 at 393.
49 Ross, above n 19; Harman, above n 35; Hanson and Yosifon, above n 20; Ciurria, above n 39; Levy, above n 37; A Kaye 'Does situationist psychology have radical implications for criminal responsibility' (2008) 59 Alabama Law Review 611.
50 Referenced in Hanson and Yosifon, above n 40, at 150-154; Hanson and McCann, above n 36, at 1362; M McKenna and B Warmke 'Does situationism threaten free will and moral responsibility?' (2017) 14 Journal of Moral Philosophy 698 at 703.
51 S Milgram 'Behavioral study of obedience' (1963) 67 The Journal of Abnormal and Social Psychology 371.
52 For details on the experiment see Harman, above n 35.
53 L Ross and RE Nisbett The Person and the Situation: Perspectives of Social Psychology (London: Pinter and Martin, 2nd edn, 2011) pp 63-66.
54 Hanson and Yosifon, above n 20.
55 Hanson and McCann, above n 36, at 1369.
56 I Nicholson '"Torture at Yale": experimental subjects, laboratory torment and the "rehabilitation" of Milgram's "obedience to authority"' (2011) 21 Theory & Psychology 737; D Kaposi 'The second wave of critical engagement with Stanley

83 J Vincent 'Facebook's Head of AI really hates Sophia the Robot (and with good reason)' The Verge (18 January 2018), https://www.theverge.com/2018/1/18/16904742/sophia-the-robot-ai-real-fake-yann-lecun-criticism (last accessed 25 January 2023).
98 'The rights and wrongs of autonomous systems' (24 July 2021), https://www.saab.com/newsroom/stories/2021/july/the-rights-and-wrongs-of-autonomous-systems (last accessed 25 January 2023).
99 See eg Chesterman, above n 93; Shavell, above n 91.
100 See eg European Commission 'Report on the safety and liability implications of artificial intelligence, the internet of things and robotics' COM (2020) 64 final; AI Liability Directive, above n 1.
101 BW Smith 'Lawyers and engineers should speak the same robot language' in R Calo et al Robot Law (Cheltenham: Edward Elgar, 2016) p 83.
102 J Sifakis 'Autonomous systems - an architectural characterization' in M Boreale et al (eds) Models, Languages, and Tools for Concurrent and Distributed Programming vol 11665 (Cham: Springer International Publishing, 2019).
103 JM Bradshaw et al 'The seven deadly myths of "autonomous systems"' (2013) 28 IEEE Intelligent Systems 54.
104 Alluded to in S Chesterman We, The Robots? Regulating Artificial Intelligence and the Limits of the Law (Cambridge: Cambridge University Press, 2021) p 60.

200 See generally J Burrell 'How the machine "thinks": understanding opacity in machine learning algorithms' (2016) 3 Big Data & Society 1; B Mittelstadt et al 'Explaining explanations in AI' Proceedings of the Conference on Fairness, Accountability, and Transparency (Association for Computing Machinery, 2019).
201 C Reed et al 'Non-Asimov explanations: regulating AI through transparency' in L Colonna and S Greenstein (eds) Nordic Yearbook of Law and Informatics (Swedish Law and Informatics Research Institute, 2022).
94 Ibid, at 212.
96 This has been in J3016 since it was first amended in 2016 and remains in the latest version. See On-Road Automated Driving Committee Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (2016) p 26, https://www.sae.org/content/j3016_201609 (last accessed 25 January 2023); On-Road Automated Driving Committee Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (2018) p 28, https://www.sae.org/content/j3016_201806 (last accessed 25 January 2023); J3016 (2021), ibid, p 34.