Social Justice and Artificial Intelligence by Dr. Adnan Hadzi (University of Malta)

This paper discusses the argument that the adoption of artificial intelligence (AI) technologies benefits the powerful few, focussing on their own existential concerns. The paper will narrow down the analysis of this argument to jurisprudence (i.e. the philosophy of law), considering also the historical context. The paper will discuss the construction of the legal system through the lens of the political involvement of what one may want to consider to be powerful elites. Before discussing these aspects the paper will clarify the notion of "powerful elites". In doing so the paper will demonstrate that it is difficult to prove that the adoption of AI technologies is undertaken in a way which mainly serves a powerful class in society. Nevertheless, analysing the culture around AI technologies with regard to the nature of law, with a philosophical and sociological focus, demonstrates a utilitarian and authoritarian trend in the adoption of AI technologies. The paper will conclude by proposing an alternative, some might say practically unattainable, approach to the current legal system by looking into restorative justice for AI crimes, and how the ethics of care could be applied to AI technologies.

The paper does not discuss current forms and applications of artificial intelligence, as, so far, there is no AI technology (Bostrom, 2014) which is self-conscious and self-aware, being able to deal with emotional and social intelligence.
It is a discussion around AI as a speculative hypothetical entity. One could then ask, if such a speculative self-conscious hardware/software system were created at what point could one talk of personhood? And what criteria could there be in order to say an AI system was capable of committing AI crimes?
In order to address AI crimes, the paper will start by outlining what might constitute personhood in discussing legal positivism and natural law. Concerning what constitutes AI crimes, the paper uses the criteria given in King et al.'s paper Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions (King, Aggarwal, Taddeo, & Floridi, 2018), where King et al. coin the term AI crime, mapping five areas in which AI might, in the foreseeable future, commit crimes, namely:

• commerce, financial markets, and insolvency
• harmful or dangerous drugs
• offences against persons
• sexual offences
• theft and fraud, and forgery and personation

Having those potential AI crimes in mind, the paper will discuss the construction of the legal system through the lens of the political involvement of what one may want to consider to be powerful elites. Before discussing these aspects the paper will clarify the notion of "powerful elites". In doing so the paper will demonstrate that it is difficult to prove that the adoption of AI technologies is undertaken in a way which mainly serves a powerful class in society. Nevertheless, analysing the culture around AI technologies with regard to the nature of law, with a philosophical and sociological focus, enables one to demonstrate a utilitarian and authoritarian trend in the adoption of AI technologies (Goodman, 2016; Haddadin, 2013; Hallevy, 2013; Pagallo, 2013).
The paper will base the discussion around Crook's notion of "power elites" (2010), in Media Law and Ethics (Crook, 2009), and apply it to the discourse around artificial intelligence and ethics. Following Crook, the paper will introduce a discussion around power elites with the notions of legal positivism and natural law, as discussed in the academic fields of philosophy and sociology. The paper will then look, in a more detailed manner, into theories analysing the historical and social systematisation, or one may say disposition, of laws, and the impingement of neo-liberal (Parikh, 2017) tendencies upon the adoption of AI technologies. Pueyo demonstrates those tendencies with a thought experiment around superintelligence in a neoliberal scenario (Pueyo, 2018). In Pueyo's thought experiment the system becomes techno-social-psychological with the progressive incorporation of decision-making algorithms and the increasing opacity of such algorithms (Danaher, 2016), with human thinking partly shaped by firms themselves (Galbraith, 2015).
The regulatory, self-governing potential of AI algorithms (Poole, 2018; Roio, 2018; Smith, 2018) and the justification by authority of the current adoption of AI technologies within civil society will be analysed next. The paper will propose an alternative, some might say practically unattainable, approach to the current legal system by looking into restorative justice for AI crimes (Cadwalladr, 2018), and how the ethics of care, through social contracts, could be applied to AI technologies.
In conclusion the paper will discuss affect (Olivier, 2012; Wilson, 2011) and humanised artificial intelligence with regard to the emotion of shame, when dealing with AI crimes.

Legal Positivism and Natural Law
In order to discuss AI in relation to personhood this paper follows the descriptive psychology method (Ossorio, 2013) of the paradigm case formulation (Jeffrey, 1990) developed by Ossorio (1995). Similar to how some animal rights activists call (Mountain, 2013) for certain animals to be recognised as non-human persons (Midgley, 2010), this paper speculates on the notion of AI as a non-human person able to reflect on ethical concerns (Bergner, 2010; Laungani, 2002). Austin, on the other hand, suggests that the legal code is defined by a higher power, "God", to establish justice over society. For Austin the legal code is an obligation, a mandate to control society (Austin, 1998).
Hart goes on to discuss the social aspect of legal code and how society apprehends the enactment of such legal code (Hart, 1961). Hart argues that the legal code is a strategy, a manipulation of standards accepted by society. Contrary to Hart, Dworkin proposes that the legal code allow for non-rule standards (Dworkin, 1986) reflecting the ethical conventions of society. Dworkin discusses legislation as an assimilation of these conventions, where legislators do not define the legal code, but analyse the already existing conventions to derive conclusions, which then in turn define the legal code. Nevertheless, Dworkin fails to explain how those conventions come into being. Here, for Kelsen (1967, 2009), legal code is a product of the political, cultural and historical circumstances society finds itself in. For Kelsen the legal code is a standardising arrangement which defines how society should operate (Kelsen, 1991).
The paradigm case (Ossorio, 2013) allows for the potential of AI as non-human persons (Putman, 1990; Schwartz, 1982). As Rousseau puts it: "Whether as between one man and another, or between one man and a whole people, it would always be absurd to say: I hereby make a covenant with you which is wholly at your expense and wholly to my advantage" ([1762] 1968, p. 58).
"Man is born free; and everywhere he is in chains", begins Rousseau's work of political philosophy, The Social Contract (1968). Rousseau (Dart, 2005; Hampsher-Monk, 1992) aimed to understand why "a man would give up his natural freedoms and bind himself to the rule of a prince or a government" (Bragg, 2008). This question of political philosophy was widely discussed in the 17th and 18th centuries, as revolution was in the air all over Europe, particularly in France in 1789. It was in the 18th century that Rousseau published The Social Contract. Rousseau thought that there is a conflict between obedience and persons' freedom, and argued that our natural freedom is our own will. Rousseau defined the social contract as a law 'written' by everybody (Roland, 1994). His argument was that if everybody was involved in making the laws, they would only have to obey themselves and as such follow their free will. How could persons then create a common will? For Rousseau this would only have been possible in smaller communities, through the practice of caring for each other and managing conflicts for the common good, ultimately through love.
In The Art of Loving, Erich Fromm reminds us that "love is not a sentiment which can be easily indulged in by anyone" (1956). In a more critical approach to rationalized contracts, in The Sexual Contract Carole Pateman argues that "lying beneath the myth of the idealized contract, as described by Hobbes, Locke, and Rousseau, is a more fundamental contract concerning men's relationship to women" (Friend, 2004). Similarly, for Pateman, "[t]he story of the sexual contract reveals that there is good reason why 'the prostitute' is a female figure" (1988, p. 192). The feminist philosophers Annette Baier (1988, 1995) and Virginia Held (1993, 2006) have responded to such contract theories with an ethics of care. An alternative model of a social contract can be found in the Debian Social Contract:

"Our priorities are our users and free software. We will be guided by the needs of our users and the free software community. We will place their interests first in our priorities. We will support the needs of our users for operation in many different kinds of computing environments. We will not object to non-free works that are intended to be used on Debian systems, or attempt to charge a fee to people who create or use such works. We will allow others to create distributions containing both the Debian system and other works, without any fee from us. In furtherance of these goals, we will provide an integrated system of high-quality materials with no legal restrictions that would prevent such uses of the system." (2004)

While this reads like a promising scenario, one also has to be critical, as these alternatives can be vulnerable to corruption.
One could support an Open Contract practice, and suggest that a feminist notion of 'restorative justice' (Christie, 1977a; Crook, 2009) might serve to judge Open Contracts, by applying the notions of solidarity and care as principles of judicial practice. However, the concern is how to move from an abstract idea of open contracts to concrete legislation which could enable an AI technology production that is not deemed antithetical, or oppositional, to the current judicial system, by formulating a set of ground rules and protocols that will allow AI communities to function and prosper. One could argue that this can be done by defining the independent terms and conditions, namely free and open licenses. Social contracts and laws will eventually be defined for these dataspheres, but until then power elites will try to appropriate every piece of AI technology in accord with the old, non-efficacious "IP legislation" (Electronic Frontier Foundation, 2009).
Nevertheless, in trying to evaluate the argument that the adoption of AI technologies is a process controlled by powerful elites who wield the law to their benefit, one also needs to discuss the notion of power elites. Chambliss and Seidman argue that powerful interests have shaped the writing of legal codes for a long time (1982). However, Chambliss and Seidman also state that legislation derives from a variety of interests, which are often in conflict with each other. One needs to extend the analysis not only to powerful elites, but one also needs to examine the notion of power itself, and the extent to which power shapes legislation, or, on the contrary, if it is legislation itself that controls power.
In an attempt to identify the source of legislation, Weber argues that legal code is powerfully interlinked with the economy. Weber goes on to argue that this link is the basis of capitalist society (Weber, 1978). Here one can refer back to Marx's idea of materialism and the influence of class society on legislation (Marx, 1990). For Marx, legislation, legal code, is an outcome of the capitalist mode of production (Harris, 2018). Marx's ideas have been widely discussed with regard to the ideology behind the legal code. Nevertheless, Marx's argumentation limits legal code to the notion of class domination.
Here Sumner extended on Marx's theories regarding legislation and ideology and discussed the legal code as an outcome of political and cultural discussions, based on the economic class domination (Sumner, 1979). Sumner expands the conception of the legal code not only as a product of the ruling class but also as bearing the imprint of other classes, including blue-collar workers, through culture and politics.
Sumner argues that with the emergence of capitalist society, "the social relations of legal practice were transformed into commercial relations" (ibid., p. 51). However, Sumner does not discuss why parts of society are sidelined by legislation, and how capitalist society not only impacts on legislation, but also has its roots in the neoliberal writing of legal code.
To apprehend how ownership, property and intellectual rights became enshrined in legal code and adapted by society one can turn to Locke's theories (1993).
Locke argued that politicians ought to look after ownership rights and to support circumstances allowing for the growth of wealth (capital). Following Locke, one can conclude that contemporary society is one in which politicians influence legislation in the interest of a powerful upper class, a neo-liberal society. Still, one needs to ask: should this be the case, and should powerful elites have authority over the legal code, how is legislation enacted and maintained?

The Disciplinary Power of Artificial Intelligence
In order to discuss these questions one has to analyse the history of AI technologies leading to the kind of "humanised" AI system this paper posits. Already in the 1950s Turing, the inventor of the Turing test (Moor, 2003), had stated that: "We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English."

Those early AI technologies were a disembodied approach using high-level logical and abstract symbols. By the end of the 1980s researchers found that the disembodied approach was not even achieving low-level tasks humans could easily perform (Brooks, 1999). During that period many researchers stopped working on AI technologies and systems, and the period is often referred to as the 'AI winter' (Crevier, 1993; Newquist, 1994).
Brooks then came forward with the proposition of 'Nouvelle AI' (Brooks, 1986), arguing that the old-fashioned approach did not take into consideration motor skills and neural networks. Only by the end of the 1990s did researchers develop statistical AI (Brooks, 1999). Paradowski sees further progress in the "optimization of their processes and their reactivity and sensitivity to environmental stimuli, and in situated human-machine interaction. The concept of multisensory integration should be extended to cover linguistic input and the complementary information combined from temporally coincident sensory impressions" (Paradowski, 2011).
With this historical analysis in mind one can discuss the paper's focus on power elites. Raz studied the procedures through which elites attain disciplinary power in society (Raz, 2009). Raz argues that the notion of the disciplinary power of elites in society is exchangeable with the disciplinary power of legislation and legal code. Raz explains that legal code is perceived by society as the custodian of public order. He further explains that by precluding objectionable actions, legislation directs society's activities in a manner appropriate to jurisprudence. Nevertheless, Raz did not demonstrate how legislation impacts on personal actions. This is where Foucault's theories on discipline and power come in. According to Foucault the disciplinary power of legislation leads to a self-discipline of individuals (Foucault, 1995). Foucault argues that the institutions of courts and judges motivate such a self-disciplining of individuals (Chen, 2017), and that self-disciplining rules serve "more and more as a norm" (Foucault, 1981, p. 144).
Foucault's theories are especially helpful in discussing how the "rule of truth" has disciplined civilisation and how power elites, as institutions, push through an adoption of AI technologies which seem to benefit mainly the upper-class. Discussions around truth, Foucault states, form legislation into something that "decides, transmits and itself extends upon the effects of power" (Foucault, 1986, p. 230). Foucault's theories help to explain how legislation, as an institution, is rolled out throughout society with very little resistance, or "proletarian counter-justice" (Foucault, 1980b, p. 34).
Foucault explains that this has made the justice system and legislation a for-profit system. With this understanding of legislation, and social justice, one does need to reflect further on Foucault's notion of how disciplinary power seeks to express its distributed nature in the modern state. Namely, one has to analyse the distributed nature of those AI technologies, especially through networks and protocols.

In Protocol, Galloway describes how these protocols changed the notion of power and how "control exists after decentralization" (2004, p. 81). Galloway argues that protocol has a close connection to both Deleuze's concept of 'control' and Foucault's concept of biopolitics (Foucault, 2008) by claiming that the key to perceiving protocol as power is to acknowledge that "protocol is an affective, aesthetic force that has control over life itself" (2004, p. 81). Galloway suggests (2004, p. 147) that it is important to discuss more than the technologies, and to look into the structures of control within technological systems, which also include underlying codes and protocols, in order to distinguish between methods that can support collective production, e.g. the sharing of AI technologies within society, and those that put AI technologies in the hands of the powerful few. Galloway's argument in the chapter Hacking (2004, p. 146) is that the existence of protocols "not only installs control into a terrain that on its surface appears actively to resist it", but goes on to create the highly controlled network environment. For Galloway hacking is "an index of protocological transformations taking place in the broader world of techno-culture" (2004, p. 157).
In order to be able to regulate networks and AI technologies, control and censorship mechanisms are introduced to networks by applying them to devices. Fitzpatrick expands on Foucault's theory, investigating the "symbiotic link between the rule of law and modern administration" (Fitzpatrick, 2002, p. 147).
Fitzpatrick states that legal code is not only a consequence of disciplinary power, but that it also legalises dubious scientific experiments. Here again one can make the link to ethically questionable advances in AI technologies. Legislation, or legal code, Fitzpatrick argues, corrects "the disturbance of things in their course and reassert[s] the nature of things" (ibid., p. 160). For Fitzpatrick legislation is not an all-embracing, comprehensive concept as argued by Dworkin (1986) and Hart (1961); rather, legislation is defined by elites. For Fitzpatrick legislation "changes as society changes and it can even disappear when the social conditions that created it disappear or when they change into conditions antithetical to it" (Fitzpatrick, 2002, p. 6).
Furthermore, West (1993) suggests that the impact of disciplinary power through legislation on the belief system of individuals does not allow for an analytical, critical engagement by individuals with the issues at stake. Legislation is simply regarded as given. In relation to the disciplinary power of AI technologies, issues with privacy, defamation and intellectual property laws are not being questioned. Nevertheless, West's argument that all individuals adhere to equivalent morals is improbable.

AI technologies and Restorative Justice: The Ethics of Care
Having said this, the prospect could be raised that restorative justice might offer "a solution that could deliver more meaningful justice" (Crook, 2009, p. 310). Restorative justice advocates compassion for the victim and offender, and a consciousness on the part of the offenders as to the repercussions of their crimes. Tocqueville argued that to live in liberty "it is necessary to submit to the inevitable evils which it engenders" (Tocqueville, 2004). One can argue that these evils are becoming more evident nowadays with the advance of AI technologies.

For AI crimes, punishment in the classical sense may seem to be adequate (Montti, 2018). Duff (2003) argues that using a punitive approach to punish offences educates the public. Okimoto and Wenzel (2010) refer to Durkheim's studies on the social function of punishment (Durkheim, 1960), serving to establish a societal awareness of what ought to be right or wrong. Christie (1977b), however, criticises this form of execution of the law. He argues that, through conflict, there is the potential to discuss the rules given by law, allowing for a restorative process, rather than a process characterised by punishment and a strict following of rules. Christie states that those suffering most from crimes suffer twice: although it is the offenders being put on trial, the victims have very little say in courtroom hearings, where mainly lawyers argue with one another. It basically boils down to guilty or not guilty, with no discussion in between. Christie argues that running restorative conferencing sessions helps both sides to come to terms with what happened. The victims of AI crimes would not only be placed in front of a court, but would also be offered engagement in the process of seeking justice and restoration.
Restorative justice might support victims of AI crimes better than the punitive legal system, as it allows the sufferers of AI crimes to be heard in a personalised way, which could be adapted to the needs of the victims (and offenders). As victims and offenders represent themselves in restorative conferencing sessions, these become much more affordable (Braithwaite, 2003), meaning that the barrier to seeking justice due to financial costs would be partly eliminated, allowing poorer parties to contribute to the process of justice. This would benefit wider society, and AI technologies would not only be defined by a powerful elite.
Restorative justice could hold the potential not only to discuss the AI crimes themselves, but also to get to the root of the problem and discuss the cause of an AI crime. For Braithwaite (1989) restorative justice makes re-offending harder.
In such a scenario, a future AI system capable of committing AI crimes would need to have a knowledge of ethics around the particular discourse of restorative justice. The implementation of AI technologies will lead to a discourse (Sample, 2018b) around who is responsible for actions taken by AI technologies. Even where ethical guidelines are clearly defined, they might be difficult to implement (Conn, 2017), due to the competitive pressure under which AI systems are developed.
That said, this speculation is restricted to humanised artificial intelligence systems that could become part of a restorative justice system, through the very human emotion of shame.
Without a clear understanding of shame (Rawnsley, 2018) it will be impossible to resolve AI crimes in a restorative manner. Thus one might want to think about a humanised, cyborgian (Haraway, 1985; Thompson, 2010) proposal of a symbiosis between humans and technology, along the lines of Kasparov's advanced chess (Hipp et al., 2011), as in advanced jurisprudence (Baggini, 2018), a legal system where human and machine work together on restoring justice, for social justice.