1 Introduction

The inquiry into the moral status of artificial intelligence (AI) has generated a prolific theoretical discussion (Calverley, 2008; Coeckelbergh, 2012, 2014; Gunkel, 2012, 2018; Llorca-Albareda & Díaz-Cobacho, 2023; Mosakas, 2021; Müller, 2021). A new kind of entity that does not share the material substrate of human beings is beginning to show signs of a number of properties that are central to the understanding of moral agency in modern philosophy (Floridi & Sanders, 2004; Bostrom & Yudkowsky, 2018; Llorca Albareda, 2023). In the face of this scenario, questions arise such as: what properties are necessary for moral status (Gordon, 2021; Mosakas, 2020; Sullins, 2011); how can we access these properties (Neely, 2014; Søraker, 2014; Sparrow, 2004); in what sense are they deployed in new artificial entities (Illies & Meijers, 2014); and how realistic are the forecasts announcing the future appearance of artificial entities as intelligent as, or more intelligent than, human beings (Bostrom, 2014; Totschnig, 2019)?

The novelty of the question about the moral status of AI, however, does not lie in a mere resignification of the debates that took place in animal ethics (Gerdes, 2016; Powers, 2013). Animal ethics was the first to propose that the moral consideration of an entity is determined by the possession of certain properties and not by its belonging to a particular species (Hursthouse, 2013; Singer, 1975/2009). However, this approach is based on the idea that, in order to have moral status, it is sufficient to be a moral patient, i.e., to be sentient and capable of being harmed (Singer, 1979/2011). The possession of intellectual qualities associated with adult human beings becomes a secondary criterion that can confer a higher degree of moral status (DeGrazia, 2008; Warren, 1997), but that cannot determine the limits of the circle of moral consideration (Singer, 1983). AI, by contrast, poses a challenge to traditional conceptions of moral agency rather than moral patiency (Floridi & Sanders, 2004), since current technology is a long way from being able to construct sentient artificial beings (Véliz, 2021). The recent development of AI is leading to the view that humans are no longer the only entities with sufficient intelligence and rationality to deliberate morally (Nadeau, 2006).

However, both the discussions on the moral status of animals and those on the moral status of AI start from the following reasoning (Coeckelbergh, 2012):

  • P1. An individual X has moral status if and only if she possesses property Y.

  • P2. X can possess property Y.

  • C1. X can have moral status.Footnote 1
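
This schema can be made explicit in a minimal modal sketch (the notation below is mine and purely illustrative, not drawn from the cited authors): if P1 is read as holding necessarily, the step from the mere possibility of possessing Y to the mere possibility of having moral status is valid on a standard, constant-domain modal reading.

\[
\text{P1. } \Box\,\forall x\,\bigl(M(x)\leftrightarrow Y(x)\bigr), \qquad \text{P2. } \Diamond\, Y(a) \;\;\vdash\;\; \text{C1. } \Diamond\, M(a)
\]

Here $M(x)$ abbreviates "x has moral status", $Y(x)$ "x possesses property Y", and $\Diamond$ "possibly"; from P1 one obtains $\Box\bigl(Y(a)\rightarrow M(a)\bigr)$, which together with P2 yields C1.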

AI can have moral status if and only if it possesses the defining property of moral agency, whether this is consciousness (Himma, 2009), internal life (Nyholm, 2020), or intentionality (Powers, 2013). These properties are those that have traditionally characterized human beings. John Danaher (2019a) expresses this idea by pointing to the civilizational crisis that can result from the massive implementation and development of AI: what defines human beings, their capacity for agency, is undermined by new artificial entities, because they will come to occupy the domains in which humans were previously able to exercise this capacity. However, the undermining of moral agency is presented in the literature on the moral status of AI in another sense as well: the fact that artificial entities can acquire these kinds of properties calls into question the very definition of being human. The properties that humans exclusively possessed in the past are now shared by other types of entities (Tegmark, 2018). This is why the advent of AI can be understood as having provoked a foreboding that we stand at the gates of an anthropological crisis (Brey, 2014; Bryson & Kime, 2011). The great oppositions on which the idea of the human was sustained have been collapsing throughout history (Mazlish, 1993), and we are now witnessing the fall of the last of the exclusions, that which separates the objectual and artifactual world from the human world (Haraway, 1985/2006; Latour, 1993).

This article will argue that this anthropological crisis is only an alleged one. The debate on the moral status of AI starts from this crisis when articulating the theoretical positions that enter into the discussion: either new properties are proposed that yield a new definition of the human being, or it is admitted that there are no significant differences between human beings and AI, should the latter possess such properties. However, this debate starts from a questionable anthropological premise: that human beings are defined by the possession of a given property. Discussions about new AI systems and their resemblance to human beings have set aside an array of anthropological models that developed in parallel with the advance of technologies and that challenge the state of emergency declared for contemporary reality. The debate abruptly separates the human being from technologies once again and abandons the questioning of the relation between the two and their capacity for hybridization (Latour, 1993; Verbeek, 2005). The task of the article will be to show, from the history and philosophy of technology, that it is possible to conceive other ways of understanding the human being and its relation with technology. To this end, six anthropological models will be proposed on the basis of three criteria of analysis: traditional anthropology, industrial anthropology, phenomenological anthropology, postphenomenological anthropology, symmetrical anthropology, and cyborg anthropology. It will then be argued that the history and philosophy of technology lead us to take into consideration the relational dimensions of anthropology, and that this idea has very important effects on the debate about the moral status of AI.

2 Technological Anthropology

Arnold Gehlen (1988) argued that once human beings had displaced nature, their opposite, they would turn their gaze back on themselves. The blurring of the barriers between nature and the human calls into question the construction mechanisms of the concept of humanity: its properties, constructed by virtue of the exclusion of something that is not itself, cease to differentiate it from its opposite (Haraway, 1985/2006). Therefore, when these demarcation criteria cease to function, the human being begins to wonder about herself, about the reasons by virtue of which she can be classified as a human being. And this is precisely what happens with AI: the artifactual world ceases to be that to which the human being is opposed and begins to call into question that which was understood as human.

This anthropological crisis reveals the underlying assumptions of the relation between human beings and technologies that encapsulate the debate on the moral status of AI. It is assumed that the human being is such because of a series of properties that she possesses. If these do not work, others must be proposed. If they remain inadequate, a more exhaustive catalog of properties must be provided. If they still remain inadequate, it must be admitted that there are no properties that allow us to meaningfully differentiate human beings from AI. At no point is it questioned that the human being should be characterized as an entity defined through certain properties (Coeckelbergh, 2012; Gunkel, 2012).

Nevertheless, philosophy of technology throughout the twentieth century, although it has lacked an exhaustive anthropological investigation, has provided insight into the relations that can be maintained with technologies and into how these relations open fracture lines that undermine the radical separation between the artifactual and the human world. Our aim in this section is to develop the various anthropological models that can be derived from philosophy of technology. Their rationale lies not only in the theoretical originality of the cited authors, but also in the type of concrete relations that emerged with the introduction of new technologies.Footnote 2

Where do these models come from? Many philosophers of technology have simply chosen to distinguish instrumentalist conceptions from substantivist ones (Feenberg, 1991; Mitcham, 1994/2022; Rapp, 2012). I consider, however, that this distinction is insufficient and that we need further analytical criteria. In addition to the already noted conceptions regarding technology, I will introduce the anthropological gradient and the concrete relations with technologies. Each of these criteria can be considered a philosophical dimension of technologies: ontological, epistemic, and practical. When crossed with one another, they yield six anthropological models (Table 1).

Table 1 The anthropological models derived from philosophy of technology. They have been obtained through three anthropological criteria: anthropological gradient (ontological dimension), conceptions regarding technology (epistemic dimension), and concrete relations with technology (practical dimension)

2.1 Three Anthropological Criteria

2.1.1 Anthropological Gradient

We will call the continuum of equivalence/difference between human beings and technologies the anthropological gradient. It refers to the ontological dimension of the relation between human beings and technologies, since it categorizes the type of reality of both entities, to what extent they are the same or different, and what type of (cor)relations can occur between them.

It is precisely this dimension that Peter-Paul Verbeek (2008a, 2011, pp. 139–152) presses against Don Ihde (1990). Verbeek surreptitiously introduces an anthropological question of the greatest relevance. One of Ihde's (1979, 1990) innovative intuitions consists in transferring the phenomenological notion of intentionality to the use of technologies. "It is only through the transformation which the instrument effects, that features which may be noted to be genuinely new emerge. Phenomenologically speaking, the instrument allows new noematic features to arise within the horizon of perceptual experience" (1979, p. 22). Human intentionality is profoundly affected by the mediations effected by technologies, which introduce fundamental changes in the forms of perception. A blind man's cane (Merleau-Ponty, 1945/2013) or a dentist's exploratory probe (Ihde, 1979) are not mere prostheses, but determine the way the world is perceived. Verbeek argues that the figure of the cyborg challenges this notion of intentionality, since an instrument that is used partially and remains clearly distinguishable from the subject who uses it is not the same as an instrument that becomes part of a biological body. In the former case, the relation is one of quasi-transparency (Ihde, 1990), while in the latter transparency is total (Verbeek, 2005).Footnote 3 Therefore, before determining the relation that the human being has with the world, it is necessary to determine the ontological continuity or discontinuity between the human being and technologies.Footnote 4

Assuming the importance of the new question introduced by Verbeek, I propose four types of ontology of the relation between the human being and technologies:

  • Human being ≠ technologies. These are two ontologically distinct entities with no strong connections between them.

  • Human being → technologies. The human being is the focus of intentionality, but she perceives reality mainly through technologies; that is, technologies condition the way she perceives reality.

  • Human being ⇆ technologies. The human being is no longer the focus of intentionality nor the ontologically prioritized entity in the relation with technologies. The relation is one of co-determination (Verbeek, 2005), that is, both are transformed in their reciprocal interaction.

  • Human being = technologies. The human being and technologies are ontologically indistinguishable, that is, there are no defining properties that distinguish one entity from the other.

2.1.2 Conceptions Regarding Technology

The most common classifications of philosophy of technology have tended to distinguish instrumentalist conceptions from substantivist ones. Both positions answer the following epistemological question: how should we understand technologies? According to Andrew Feenberg (1991, pp. 5–7), instrumentalism is based on four basic premises: (i) technologies are indifferent to the various purposes for which they can be used; (ii) technologies are indifferent to politics, and in no case can the artifact be understood as a particular political construct; (iii) technologies can only arise in the light of knowledge of the universal laws of nature, so their character is neither particular nor contingent, but has a degree of generality that surpasses concrete contexts and individual intentions (Rapp, 2012); (iv) technologies, being products of the universality of science, can be applied to different fields without undergoing substantial modifications (Ellul, 2021). Martin Heidegger (1954/2013) was one of the first to stress that the instrumentalist view derives from conceiving technologies as applied science: human beings elaborate hypotheses and theories that are materially applied in the form of technologies.

However, both Heidegger and many other authors have argued that technologies have much more profound effects on human beings. Before we theorize about the world, we already have a practical relation with technologies.Footnote 5 This gives technologies a much more substantive influence. Technologies incline human beings in different ways, both perceptually and existentially (Verbeek, 2005, 2011), which makes it impossible to understand them as neutral objects and procedures (Coeckelbergh, 2022). They embody ways of life and values that affect us in distinctive ways (Ihde, 1990).

The problem with the division between instrumentalism and substantivism lies in the fact that the specificities of the latter are not usually attended to. Classical philosophy of technology (Ellul, 2021; Heidegger, 1954/2013, 1927/2010; Mumford, 1934/2010) understood the question of substantivity in transcendental terms, which is why many later authors have subsumed these approaches under a transcendental philosophy of technology (Schuurman, 1980; Verbeek, 2005). The meaning of technologies in the classical positions resides in their conditions of possibility; that is, there is a Technology that possesses a sort of essential or primary structure determining human behavior in a total way. Fatalistic views that understand modern technology as a uniform and monolithic rationality can also be placed within this position (Marcuse, 1964/2013; Weber, 1920/1993).

But not all authors who have defended the non-neutral character of technologies have opted for this type of approach. Many of them understand that technologies have profound effects on human beings, but that their influence is ambiguous and depends on certain characteristics of their inner workings (Ihde, 1979; Verbeek, 2005) and/or the social relations in which they are embedded (Feenberg, 1999; Winner, 1986/2010).Footnote 6 Albert Borgmann (1987) fully grasped these differences and proposed a new category of analysis: pluralism.Footnote 7 This position highlights the ambiguity inherent to technologies and how they can embody different values. I will introduce, however, a further distinction between weak pluralism and strong pluralism. Weak pluralism argues that technologies are human constructs that, by virtue of their intentionally developed design, may embody some values or others (Friedman et al., 2013), while strong pluralism highlights the impossibility of completely controlling technologies' inclinations of action and perception, owing to the diversity of contexts of use and the subtleties of their internal structure (Verbeek, 2008a, 2011).

2.1.3 Concrete Relations with Technologies

In the last quarter of the twentieth century we witnessed, within the philosophy of technology, an empirical turn (Achterhuis, 2001). Philosophers of technology stopped asking about the spirit of modern technology (Ellul, 2021) or about the mode of revelation of technologies (Heidegger, 1954/2013) and began to analyze the inner workings and use of concrete technologies.Footnote 8 This interest in particular technologies allows a much more exhaustive analysis of technological possibilities: it discloses aspects of technologies that remained hidden in generalist discourses on technical rationality. The main exponents of the empirical turn claim the need to capture the concrete and pragmatic aspects of the use of technologies in order to escape from the previous transcendentalism (Ihde, 2009).

One of the philosophers who has most emphasized the need to study in depth the concrete relations between human beings and technologies is Don Ihde. He uses the phenomenological method to account for the fact that the correlation between the object experienced (noema) and the act of experiencing (noesis) is produced through the use of technologies. Human beings, since primitive times, have intentionally directed themselves towards the world by means of artifacts. Ihde therefore sets himself the task of showing to what extent experience is mediated by technologies and to what degree they shape our perceptions and interpretations. In Technology and the Lifeworld (1990),Footnote 9 he proposes four types of relations with technologies: embodiment relations, hermeneutic relations, alterity relations, and background relations. All of these must be understood within a continuum rather than discretely.Footnote 10

First, embodiment relations are those in which technology acts as an extension of bodily perception. The artifact is withdrawn from the user's attention and felt by the user as a part of her perceptual organs.Footnote 11 Thus, this type of relation is based on quasi-transparency, i.e., the object is withdrawn from direct perception, but, at the same time, it can never be completely equal to bare perception.Footnote 12 Ihde formulates them in these terms: (human—technology) → world.

Secondly, hermeneutic relations are those we maintain with artifacts that, despite not being extensions of our perception, refer to an aspect of the world that lies beyond them. Ihde (1990) gives the example of a thermometer, an object we confront directly, which we must interpret according to a certain language and which refers to an aspect of the world (temperature) that is outside itself. He uses the following formula: human → (technology—world).

Thirdly, alterity relations are those that we maintain with artifacts as Others: entities alien to each of us that do not refer to a reality external to themselves but constitute in themselves a mystery. These types of relations are never fully complete with technologies (Ihde, 1990) because there always remains a residue of their artifactuality, but, in spite of this, they can awaken in us emotions of love and friendship (Turkle, 2011). Ihde formulates it as follows: human → technology (- world).

Fourth, background relations are those that form the backdrop of direct relations between humans and technologies. We do not notice them in the course of our daily lives, but they form the stage on which the rest of the relations develop. The thermostat, with its associated functions and sounds, is a good example of technologies that can be placed in the background of human experience. Ihde formulates it as follows: human (- technology—world).

2.2 Anthropological Models

2.2.1 Traditional Anthropology

Traditional anthropology starts from the Cartesian premise that there is a radical separation between the res cogitans and the res extensa. The human being is the mental substance endowed with reasoning, while everything that does not belong to this category is explained on purely mechanistic grounds. She can make use of tools and instruments as she pleases, even more so if she knows the laws of nature that mediate her actions. The ends and values of technologies depend entirely on human beings. This is why the ontological fracture between the two types of entities is total: human beings are capable of freedom and autonomy while instruments are nothing but an indeterminate mass to be shaped (Verbeek, 2011).

This makes technologies mere instruments that can be used for one purpose or another, since everything that acts as a mediator between the human being and her ends is conceived as a technical instrument. Hence, for Aristotle (2004, 432a1) even the hand itself is conceived as the "instrument of instruments". This bodily analogy provides us with a very relevant key: technical instruments are understood under the paradigm of organ substitution and surpassing (Gehlen, 1988). That is, when the body is incapable of doing something by its own means, it externalizes that function in the form of technique. This perspective derives from the concrete use of technical tools. Ihde (1979) already indicated that this is precisely the characteristic of embodiment relations, in which the relation of quasi-transparency makes the object mediate with hardly any difference from the bare perception of things. This particular type of relation to techniques constitutes a central feature of the pre-industrial era, since a unitary conception of technique was lacking (Dessauer, 1927). Techniques are understood as concrete tools used for certain activities and under certain conditions of use. Technique is neither a driving force of human development (Ellul, 2021; Mumford, 1934/2010) nor a mode of revelation (Heidegger, 1927/2010), but is understood as a set of concrete and separate instruments used for the benefit of human beings.

2.2.2 Industrial Anthropology

Industrial anthropology takes this name because of the historical transformations that have taken place in technology (Mumford, 1934/2010). By this we are not referring to the relevance of the machine within the contemporary technical framework,Footnote 13 but to the fact that technique has become a unitary force (Dessauer, 1927) and a form of rationality deployed across the different social spheres (Ellul, 2021). Technologies are no longer a set of instruments that each person uses as tools in certain activities directed towards an end. On the contrary, technology is an interwoven and inescapable context that shapes the curvature of contemporary societies (Ihde, 1990).

It is in this sense that background relations are claimed to be the fundamental aspect of this type of anthropology. The first philosophers of technology question the abstract and technified nature of life, its difference with respect to traditional techniques, and the degree to which technology determines humanity. Although they keep in mind embodiment relations, their main interest lies in accounting for the technification of life that occurred within the framework of the industrial societies of the twentieth century. However, the approaches to this phenomenon are not univocal and we can distinguish two typologies inspired by the difference proposed by Carl Mitcham (1994/2022) between philosophy of engineering and continental philosophy of technology.

On the one hand, philosophy of engineering pursues a philosophical explanation of the act of engineering creation (Mitcham, 1994/2022). Its proponents try to analyze how the pre-existing idea in the mind of the engineer is connected with the finished product after the process of technical transformation. The greatest exponent of this position is Friedrich Dessauer (1927). Dessauer understands technique in a unitary manner as a process that, although individualized in each creator, occurs universally. Each creator starts from an ideal form that, after passing through technical disquisitions, ends up materializing in the sensible world. He proposes a fourth Kantian critique, one that understands technology transcendentally: this act of creation unites us with the divine, since it connects the ideal forms, whose origin is unknown to us, with the material forms experienced by our sensibility. Technologies, therefore, are understood as instruments that make it possible to establish a bridge between the divine and the human, although they lack autonomy.

On the other hand, continental philosophy of technology views background relations as problematic: technologies are forces that determine human activity and induce certain forms of behavior and understanding that escape truly human ends. The task, then, is to analyze the conditions of possibility of technologies and the determining force they exert on human doing (Verbeek, 2005). In this position we can place Jacques Ellul (2021), Lewis Mumford (1934/2010), and Martin Heidegger (1927/2010).

Both should be understood as transcendental approaches to technologies: what is to be studied are not concrete technologies, but how these, taken unitarily, determine human behavior by virtue of their essential features (Schuurman, 1980; Verbeek, 2005). The difference between them, however, lies in the fact that the former understands technologies as instruments and the latter understands them substantively. This distinction makes it difficult to offer a univocal anthropological gradient. Philosophy of engineering maintains the ontological fracture of the previous typology, but the substantivist position is more varied. While it argues that technologies are not neutral, this does not mean that humans and technologies are not different realities. Mumford and Ellul make efforts to establish major differences between the one and the other. Heidegger, on the contrary, does outline a first phenomenological interpretation of technologies (Ihde, 1979; Verbeek, 2005).

2.2.3 Phenomenological Anthropology

Although Heidegger's can be considered the first phenomenological approach to the problem of technology, it is not until Don Ihde that a phenomenological anthropology of technology can be traced. And this is mainly due to Ihde's empirical turn (Achterhuis, 2001): the phenomenological invariant is not to be sought in Technology, understood as a driving and unifying force that (un)conceals a certain conception of the world, but in the study of concrete relations with technologies, which are rarely univocal and instead multistable (Ihde, 1979, 1990). Only in this way can it be comprehended that Husserlian intentionality is traversed by technologies. Although the focus of intentionality is on human beings, this concept cannot be well understood unless we analyze to what extent each concrete technology simultaneously broadens and narrows the direct perception of reality. In this sense, human beings are no longer simply influenced by the historical current of technology, nor do they merely use it as an instrument for their ends; on the contrary, the very way in which they experience the world is technological (Ihde, 1990). Once the technological character of human experience is recognized, it is impossible to conceive of humans in the same manner. Technology is no longer something that we use for our ends or an encompassing rationality, but a particular kind of entity that thoroughly transforms how we perceive the world. The human being and technologies are ontologically distinct entities, but they are phenomenologically correlated, as the latter mediate all human experience.

The empirical nature of the phenomenological relation makes its conception of technologies quite distinct. Here too, technologies are not neutral, but in a different manner. First, because not all technologies transform human intentionality in the same way (Ihde, 1979; Verbeek, 2005). A blind man's cane, for example, modifies perception in a different manner than a microscope does: the former broadens tactile capabilities while the latter enhances vision, and both limit reality to certain aspects of perception, the tactile or the visually perceptible. Secondly, because these perceptual changes are not univocal within each specific technology. This is explained by the fact that two levels of analysis can be distinguished: the microperceptual level and the macroperceptual level (Ihde, 1990). On the one hand, the microperceptual dimension refers to the internal structure of each technology, i.e., to what extent human perception is transformed by specific technical operations and how these are adapted to the structures of perception. On the other hand, the macroperceptual dimension refers to the large interpretative frameworks at the social level that invite us to understand the phenomenological relation under certain parameters (Feenberg, 1991, 1999). This is precisely what multistability consists of: it explains why a reality can be seen in different ways depending on the interpretative frameworks from which one starts. This accounts for the intimate connection between the two dimensions: the microperceptual dimension informs and conditions the macroperceptual dimension and vice versa.

Thus, this anthropology leads to a pluralistic view of technologies, i.e., technologies can embody different values according to their internal structure and external context. Both weak and strong pluralism have a place here, depending on whether technologies are seen as embodying previously designed human values (Friedman et al., 2013) or as mediating in ways that are distinct and impossible to capture in their entirety (Ihde, 1979, 1990). And this diversity of values can be understood through different concrete relations with technologies, owing to the empirical turn that separates this position from transcendentalism. Hence, Table 1 includes embodiment relations, hermeneutic relations, and background relations.

2.2.4 Postphenomenological Anthropology

The birth of the concept of postphenomenology must be framed within the intellectual trajectory of Don Ihde (Ihde, 2003; Selinger, 2006). The first time he explicitly used this term was in Postphenomenology: Essays in the Postmodern Context (1993), although he had previously and surreptitiously introduced it in Consequences of Phenomenology (1986). His aim was to overcome the shortcomings of classical phenomenology, above all its aspiration to be the only true approach to reality, rather than using phenomenological tools to account for the diversity of ways of experiencing reality technologically (Ihde, 1990).

Although Ihde was the initiator and promoter of postphenomenology, he did not quite carry the innovations of this new position to their ultimate consequences. His anthropology still had a unidirectional character, i.e., the human being is the focus of intentionality and technologies transform the way we perceive the experienced object (Verbeek, 2005). This shows in what sense Ihde places much emphasis on noesis and little on noema. Technologies seem to play a passive role with respect to human beings; that is, they do not seem to transform them in meaningful ways by carrying specific intentionalities that profoundly impact human modes of intentionality.

Peter-Paul Verbeek (2005) has succeeded in radicalizing the presuppositions of Ihde's postphenomenology and presenting a postphenomenological anthropology. The relation between human beings and technologies must be understood as one of co-determination, i.e., technologies are an active component and shape the reality of human beings. Not only are human beings the focus of intentionality, directed to the world through technologies; technologies themselves are also directed to the world in certain ways. Thus, we no longer find ourselves with distinct realities connected through the unidirectional effects of human intentionality; rather, both entities, although distinct, shape each other reciprocally.

This forces us to conceive technologies in a pluralistic manner. They all contain values and can channel the experiences of human beings in certain ways. But not simply in ways that are ancillary to the object of intentionality; rather, they profoundly transform what human beings are and the ways in which they experience the world. This pluralism, therefore, is always strong, because, given the active role of technologies, their conditioning effects can hardly be traced back entirely to human intentions. Moreover, like the previous anthropological model, owing to the concreteness of the analysis and its strong roots in Ihde's methodology, it is able to account for a wide range of concrete relations: embodiment, hermeneutic, and background.

2.2.5 Symmetrical Anthropology

One of the most important influences on Peter-Paul Verbeek is Bruno Latour. Although the former tries to fit phenomenology into the thought of the latter (Verbeek, 2005), Latour's rejection of this philosophical position is considerable:

The phenomenologists have the impression that they have gone further than Kant and Hegel and Marx, since they no longer attribute any essence either to pure subjects or to pure objects. They really have the impression that they are speaking only of a mediation that does not require any pole to hold fast. Yet like so many anxious modernizers, they no longer trace anything but a line between poles that are thus given the greatest importance. (Latour, 1993, p. 58)

Phenomenology is still anchored in the subject-object schema, which prevents it from fully grasping the ways in which technologies act and modify human environments. Humans and technologies are one and the same kind of entity (Latour, 1992; Latour & Weibel, 2005): they are actants that form part of networks (Latour, 1999). They lack any property that fixes them as a particular kind of entity. On the contrary, these properties are the result of their position within the networks, in which they configure and are configured by the rest of the actants. Speed bumps that restrict the speed of cars, for example, have a type of agency that profoundly affects the actions performed by human beings (Latour, 1999). Symmetrical anthropology, therefore, is one that does not freeze the image of the poles of human relations (subject/object, human/non-human, natural/social), but rather accounts for how they come to be formed as such poles through their interactions in networks (Latour, 1993). The reality of which they are part is one and the same: the reality of networks (Latour, 1999).

The conception of technology that derives from Latour's position is a strong pluralism: all technologies configure and shape the rest of the entities with which they interact, in the same way that human beings do. It is, in fact, even stronger than Verbeek's, because it removes even the barriers to interaction imposed by the subject-object schema. The consequences of this approach force us to rethink the concrete relations with technologies, since embodiment and hermeneutic relations are no longer adequate. Their adequacy depended on human perception being at the center of technological experience, either as an extension of direct perception or as an objectual reference to be interpreted. For Latour, however, all relations are background relations, as these networks operate beyond the position occupied by human consciousness with respect to the objectual world (Latour, 1993).

2.2.6 Cyborg Anthropology

The possibilities offered by AI have completely altered our relations with technologies. Latour's insistence on hybrid entities, which could not be recognized as either human or non-human, has reached its maximum historical realization: whether as technological extensions of human beings or as entities with a silicon substrate capable of being as intelligent as, or more intelligent than, human beings, hybrid individuals are beginning to become a reality. This forces us to rethink what the human being is, what intelligence is, and what the future of humanity will be. And these are precisely the questions I referred to at the beginning of the text, those that dispute what a human being is and who should fall into this category.

The debate on the moral status of AI has focused its efforts on asking this question. The fundamental intuition of these discussions is summarized in this sentence by Nick Bostrom and Eliezer Yudkowsky: "While it is generally agreed that present-day AI systems lack moral status, it is unclear exactly what attributes ground moral status" (2018, p. 61). This time we are no longer looking for attributes that separate humans from technologies, but, on the contrary, we are looking for technologies that are human enough not to be considered purely mechanical entities. Cyborg anthropology equates humans to AI technologies as long as they pass the established criteria. There is thus no longer an ontological separation between humans and technologies, but between different types of technologies.

The consequences of this are visible: technologies that lack moral consideration because they do not meet these properties become mere instruments, slaves to human ends (Bryson, 2010). However, those that do pass the threshold are fully human and thus should be treated in the same way as human beings: as alterities that deserve respect regardless of the plurality of actions they perform and beliefs they hold. Alterity relations were theorized by Don Ihde (1990), although he gave them little importance, conceiving them simply as one end of the continuum of human relations with technologies.Footnote 14 AI, however, brings a profound transformation: it may become possible for technologies to be fully an Other, not a machine designed and directed by another human being. This completely changes our relation with technologies and, at the same time, our conception of the human being.

3 Contesting Moral Status

3.1 The Relational Dimension of Technologies

The anthropological models presented above have not been randomly ordered. Their order of appearance has a historical and a conceptual root. The first refers to the fact that any philosophy of technology is deeply dependent on the social context in which it arises (Rapp, 2012), so that the explicitness of the relations between human beings and technologies conditions the type of theories that are developed about these relations. The second refers to the fact that, as noted above, the anthropological models give progressively greater weight to the relations between human beings and technologies (Achterhuis, 2001). With the exception of cyborg anthropology, these models place increasing emphasis on the relation between humans and technologies rather than radically separating them. In this section, I will show that both planes converge in their analyses and have parallel developments: the history of technology reveals a socio-technical context in which the human and the technological are increasingly intertwined, while philosophy of technology pays ever greater attention to the actual and potential relations between humans and technologies. Awareness of this situation will force us to rethink the weight of the relational dimension of technologies in the debate on the moral status of AI.

The first dimension leads us to what I will call the historical argument. This type of argument has been used on numerous occasions in debates about moral status (Gunkel, 2012, 2018; Stone, 1972/2010) in order to account for the progressive process of inclusion that has taken place within the circle of moral consideration. More and more entities form part of it, and the last barriers of exclusion are exhausting their containment forces (Haraway, 1985/2006). The moral progress that takes place throughout history will result in all morally considerable entities entering the margins of moral status. However, this argument has been widely criticized (Gerdes, 2016; Mosakas, 2020, 2021; Müller, 2021; Nyholm, 2020) because of its factual nature: that the circle of moral consideration has progressively expanded over time tells us nothing about whether this expansion should be normatively defended. The plane of being is confused with the plane of ought. This confusion turns moral status into the result of an arbitrary judgment that depends on the affinities and connections that those making the judgment have with the entity in question.

The use of the historical argument in our case has a different purpose. It will not be argued that because history has been running in a certain way, the course of events must follow the same trend. On the contrary, it will be argued that the history of technologies offers us two reasons why ignoring the relational dimension of technologies is a mistake. These reasons are based on Jacques Ellul's (2021) concept of enchainment of techniques and on Bruno Latour's (1993) concepts of purification and mixing.

First, Jacques Ellul understands the enchainment of techniques as the process in which modern technologies reveal their profound interconnection and interdependence. This is not to be confused with technical uniqueness and universalism, whose meaning, Ellul argues, derives from the uniformity of technique's principles and the equivalence of its manifestations; techniques are enchained because their functioning is only possible within the framework of the set of relations between particular techniques. Lewis Mumford (1934/2010) defended a very similar idea with his account of the passage from the paleotechnic era to the neotechnic era. While the former was characterized by an empire of disorder, in which technical advances took place through individual and fragmentary efforts that eschewed systematic knowledge, the latter is characterized by avoiding the automatic growth of technique by making use of a body of knowledge that emphasizes the interrelationship between different techniques. This means that contemporary technologies must be understood as a socio-technical system. A brief example will clarify this concept (Johnson & Noorman, 2014). If we think of a refrigerator, the first thing that comes to mind will be those properties or characteristics that differentiate it from other technologies: it cools and preserves food so that individuals can eat it in good condition. However, a more thorough analysis will highlight that a refrigerator can only work if the set of technologies to which it is related works as it should. The refrigerator only works when it is connected to the power supply, which includes the proper functioning of the socket, the network of cables that carries the electricity, and the power plant from which the energy comes. But the complexity of contemporary technologies highlights not only the interlinkage between them, but also the relations that must exist between them and human beings in order for them to function properly. For the refrigerator to fulfill its function, consumers must behave in certain ways, the technicians in charge of its installation must do their work without errors, as must those in charge of the maintenance of the power plant. In this sense, the history of technology reveals not only the contemporary interlinking of technologies, but also their intimate connection with what human beings do.

This brings us, in the second place, to the concepts of purification and mixing expounded by Bruno Latour. If contemporary technologies, because of their enormous complexity, must always be understood within the framework of a technical system, the role that human beings play in these relations between technologies must not be overlooked. Yet this attention has not been paid, owing to the particular configuration in which contemporary technology finds itself. Both Ellul (2021) and Mumford (1934/2010) emphasized that technique has developed to such a high degree thanks to its new autonomy: technical products no longer depend on the individual genius of the producer but are, to a greater extent, products of the inertia of a compact technical system. The technical sphere is purified of those elements that are not technical in order to allow its better development. That is to say, the enormous development of the socio-technical system can only occur when human affairs cease to intervene in strictly technical criteria. This process is what Latour calls purification. Latour points out, however, that the purification processes of modern technology have given rise to a profound paradox: while the exercise of separating human activities from technical activities has led to technologies becoming enormously complex, their gigantic development has shown that the technical system is always imbued with human affairs. A simple glance at the newspaper shows, according to Latour, how the enormous development of technology has led to a proliferation of mixtures and hybrids: entities of which one does not know whether they should be treated technically or humanly, since they can be considered on both planes. Cyborgs, synthetic biology, and highly intelligent artificial systems are examples of entities that are human and technical at the same time.

Modern technology has highlighted, firstly, the necessary linkage between different techniques and, secondly, the progressively intimate relation between human beings and technologies that has accompanied technological development. This leads us to a paradox that runs through all the debates on the moral status of AI.

The paradox of the relation between humans and technologies in the age of AI: at the very historical moment in which humans and technologies are most deeply intertwined and their relations most visible, the relation begins to lose weight in philosophical analysis with the emergence of highly intelligent artificial entities.

The new AI, by virtue of possessing a number of properties traditionally linked exclusively to human beings, begins to distort the image of human beings. The properties that have always characterized humanity can now be possessed by another entity, and there are therefore doubts as to whether these properties are really the ones that define humanity. However, the renewed interest in the defining properties of anthropology forgets the lessons of the history of technology, which shows, on the one hand, that technologies are not mainly defined by themselves but by the relations they maintain among themselves; and, on the other hand, that the relations that define them are not only technical but are also anchored in the relations maintained between human beings and technologies. And this is precisely the conceptual shift: contemporary philosophies of technology have taken up these historical lessons and incorporated the relational dimension of technologies into their analyses, as shown in the previous section.

3.2 The Relational Dimension of Morality

The concept of moral status was born in the second half of the twentieth century within three debates in applied ethics: abortion, animal ethics, and ecological ethics (Hursthouse, 2013). Its theoretical novelty consisted in raising the possibility that some non-human entities could be considered morally in their own right (Jaworska & Tannenbaum, 2023; Kamm, 2008). Animals, for example, should be morally considered because they possess certain properties that demand an impartial consideration of their interests (Singer, 1979/2011). Humans are no longer the only entities deserving of moral consideration because, in contrast to old anthropocentric prejudices, moral consideration can only be founded on impartially determined properties that are not subject to membership of a certain species or collective (Singer, 1975/2009).

Understanding the moral consideration of an entity as always produced by the identification of a series of properties and characteristics that it possesses is problematic. David Gunkel (2012, 2018) and Mark Coeckelbergh (2012, 2014) argue that moral life usually works the other way around: we do not first identify properties of entities and then treat the latter according to whether or not they possess the former; rather, we first relate to entities and only then grant them certain properties. Many philosophers of technology have followed a similar line of reasoning to account for our relations to technologies. Martin Heidegger (1927/2010) argues that we do not use and treat tools according to the properties they possess; rather, they are simply used pragmatically, and we relate to them according to certain horizons of understanding to which we pay no explicit attention. Don Ihde (1979) defends the same viewpoint and adduces that, before we theorize scientifically about the world, we already make use of instruments and tools in our practical life.

The anthropological models presented in the preceding section show the need to take into consideration the relational dimensions of morality and technologies. If technologies are not simply isolated artifacts endowed with a series of properties that define them, but nodes in networks of relations from which they derive their meaning (Latour, 1999), and if morality does not consist only in the identification of relevant moral properties, but these are constructed through the relations between entities (Coeckelbergh, 2012), then an account of the moral relevance of technologies should pay special attention to the relations between humans and technologies and examine in what ways these relations are moral. Let us show this through two examples.

The first was raised by Peter-Paul Verbeek (2008b, 2011): obstetric ultrasound. This technology gives prospective parents the possibility of seeing the state of the fetus through a digitized image. It could be described through its technical properties: it is a technology that determines the presence or absence of pregnancy by means of ultrasound waves emitted by a transducer, all of which is translated into a visual image seen by medical professionals and prospective parents. However, this is not all that can be said about obstetric ultrasound. The image of the fetus, at a size similar to that in the uterus, arouses affection and emotion in the parents-to-be. In addition, this technology is often used to identify pathologies, so that the fetus begins to be understood as an entity susceptible to harm. This technology, in this sense, introduces moral aspects into the perception of the fetus. The way this entity is perceived alters the way we understand it and has important moral consequences.

The second was presented by Lucas D. Introna (2014, pp. 41–48): technologies linked to computerized writing. These technologies could be understood as a simple transposition of traditional modes of writing onto computer devices. Their effects, however, produce profound changes in the way we write. First, the new forms of writing substantially change the ways in which the author relates to the text. Computerized writing tends toward rapid writing, susceptible to erasure and reworking, in contrast to thoughtful and well-reflected traditional writing, which requires its content and form to have been clearly defined beforehand. Secondly, the possibilities of plagiarism have multiplied given the enormous number of writings and documents to which the author has access. This has led to the development of anti-plagiarism programs such as Turnitin that distinguish what is plagiarism from what is not. These technologies, on the one hand, identify plagiarism, a phenomenon that was not previously well defined and demarcated, and, on the other hand, present technological solutions that define how we should understand plagiarism. Other important effects concern authors writing in a non-native language, who rely more on basic sentence structures and common vocabulary that are more likely to be flagged as plagiarism, and the conversion of all writing into intellectual property once it has passed through anti-plagiarism programs. These are all relevant moral issues that arise from the relations we have with technologies.

Technologies thus enter into a relational dimension of morality that seems to go beyond the anthropology of properties. Human beings and technologies are not defined solely on the basis of certain characteristics, but must be understood in their mutual relation, the moral impact of which is of great relevance. The cyborg anthropology that has emerged with the new AI has brought with it a renewed emphasis on the importance of properties, at a time when both the philosophy and the history of technology had extensively accounted for the significance of relations in anthropology. The claim that can be drawn from both dimensions of technology, the conceptual and the historical, is that any anthropology that seeks to explore the definitions of human beings and technologies must necessarily consider the role of the relations between the two.

3.3 Coordinates for Rethinking Moral Status

In the previous two subsections, I have dealt with two issues. On the one hand, I have shown the sense in which technologies and human beings are to be understood relationally. Their intimate intertwinement undermines the traditional belief that they are two distinctly separate entities, each possessing its own distinctive properties. On the other hand, the relational dimension of morality has been expounded. Moral status locates moral value in the objective properties of entities, yet the relation between humans and technologies introduces moral aspects that can only be grasped relationally. My argument has been that, given these reasons, moral status can hardly continue to be understood in the same way: we need to include the relational dimension of anthropology and morality. The argument has established that we need to include it, but not how we should do so. This subsection will try to offer the coordinates from which the concept of moral status should be reformulated.

It would seem that, if we conclude that anthropology and morality have a relational dimension, it follows that moral status must be entirely relational. This idea, however, would be erroneous. We have not offered reasons to support the claim that moral status must be comprehended solely and exclusively from a relational viewpoint. This type of reasoning is nonetheless common among some of the leading exponents of the relational turn in moral status (Coeckelbergh, 2014; Gunkel, 2012). In contrast, I will offer three arguments that undermine the idea that moral status can be entirely relational: the anthropological argument, the ontological argument, and the ethical argument.

Firstly, the anthropological argument challenges anthropologies devoid of any kind of properties. This idea is equivalent to what is understood as negative anthropology, i.e., that which determines the human being precisely by her indeterminacy. Two authors who embrace this conception are Heidegger (1927/2010) and Gehlen (1988). Heidegger places the openness of the human being at the center of his anthropology, and Gehlen understands her as a cultural animal that, unlike other animals, lacks biological determination. However, this is problematic for two reasons. On the one hand, the indeterminacy of the human being may rest on other properties. It can be argued that without consciousness or rationality human beings could not be open or culturally malleable. Without them, the indeterminate relations that are not defined by animal properties could not occur. On the other hand, a negative property need not imply the absence of properties. A negative property can be equivalent to the properties that make it possible for the human being to be understood in a primarily relational manner.

Secondly, the ontological argument is a continuation of the previous one. For relations between entities to take place, entities have to be capable of entering into certain relations. This capacity is usually a consequence of the intrinsic properties of the entity. Let us return to Latour's (1999) example of the speed bump. The speed bump may constitute an inscription of a certain kind of morality that prescribes traffic rules to drivers. However, if the speed bump did not have the physical properties of internal consistency and hardness, it could not enter into such relations. Thus, it seems that, at the ontological level, the relations that an entity can maintain are only possible because of certain intrinsic properties that it possesses.Footnote 15

Third, the ethical argument has been put forward by authors who have criticized the relational turn in moral status (Gerdes, 2016; Mosakas, 2020, 2021; Müller, 2021; Nyholm, 2020). Advocates of relationality call into question the existence of such properties and the possibility of accessing them. By denying the importance of properties, they conclude that any entity can have moral status depending on how we relate to it. And this is problematic for two reasons. On the one hand, it sets arbitrary boundaries. The innovation of moral status consists in giving objectivity to moral valuation: there are entities that have value beyond the relation we can maintain with them. Absent weighty reasons, we could value pencils as if they were people and human beings as if they were pencils (Müller, 2021). On the other hand, it hides the preeminence of certain kinds of properties. In order to determine which relations have value, we seem to presuppose certain phenomenological properties. So only whoever possesses these properties can claim the moral value of the entities to which she relates.

These three arguments suggest that we cannot abandon properties. While it has been argued that relations play a fundamental role, abandoning properties does not seem the best choice. Therefore, hybrid approaches to moral status appear to be the most appropriate. One of the reasons why these hybrid approaches have had little relevance in the literature derives from the little weight given to the analysis of the concept of property. An important distinction can be made here. The properties associated with moral status correspond to what are understood as intrinsic properties, i.e., properties that depend only on the internal nature of the entity and are in no way the result of the relations that the entity may maintain. However, there is another type: dispositional properties. These properties are usually associated with the capacity of an entity to respond in a certain way under certain conditions (Mumford, 2003). This implies two aspects. On the one hand, these properties are temporally variable, that is, they are modal because they only manifest themselves under certain conditions, and they can be acquired depending on how the entity develops. On the other hand, they are structure- or environment-dependent properties. Latour's speed bump may be hard because of the physical conditions on Earth, but, if we were to put it on another planet, it might become a fragile entity (Smith, 1977). Both types of properties, considered together with the emphasis we should give to relations, offer two possibilities for rethinking moral status.

First, we find intrinsic properties that are a condition of possibility for valuable relations. We can identify two types. On the one hand, there are properties that allow certain entities to maintain certain relations. An example is friendship. While some have argued that we can maintain friendly relationships with robots (Danaher, 2019b), these sorts of relations only seem possible if certain mental properties are possessed. Without them, the requirements of authenticity and reciprocity cannot be met (Nyholm, 2020). Thus, the possession of certain properties makes an entity able to be part of a relation.Footnote 16 On the other hand, there are properties that make it possible to give inherent value to other entities. This reasoning was developed by Korsgaard (1983) in the following terms: there are entities that we can value in themselves without their possessing intrinsic moral properties. We need rational or phenomenological moral capacities that establish the conditions by which an entity has value. That does not mean that these entities are their own source of value, but that they are valuable in themselves.

Secondly, we find relations that are a condition of possibility for valuable dispositional properties. Moore (1903/1976) put forward this idea through the concept of organic unity: the sum of the value of the individual properties of a relation does not constitute the total value of the relation; rather, the total value of the relation determines the particular value of the parts.Footnote 17 Moore gives the example of aesthetic experience: the enjoyment of a work of art does not lie in the value of any of its parts. Both the mental state and the material object have hardly any value in isolation; only through the combination of the two is it possible to understand the value of aesthetic experience. Many properties therefore have different values depending on the relations in which they are embedded. This idea can also be raised at the ontological and anthropological level: many morally relevant properties, such as the attributes linked to character, are dispositional and depend on the particular history of the individual. As AI systems become part of certain relationships, they may acquire dispositional properties that gain value within the total framework of relations (Jecker et al., 2022).

4 Conclusion

The anthropology of properties plays a fundamental role in the debate on the moral status of AI. The agential capabilities that artificial entities are progressively acquiring are calling into question traditional anthropological definitions. This has led to a renewed interest in the concept of moral status, with the aim of rethinking the properties by which we define human beings. This article has argued that the presuppositions of the anthropology of properties, on which the debate about the moral status of AI is built, contradict the anthropological legacy of the philosophy and history of technology. From the various philosophies of technology developed throughout the twentieth century, a set of anthropological models can be derived that show the importance of relations, as against properties, in understanding humans and technologies. If the debate about the moral consideration of new artificial entities is to be enriched, we must rethink the consistency of the concept of moral status and how we can include the important role of relations in our moral lives. Moral status does not depend entirely on the identification of certain properties from which the moral treatment of an entity is derived, but is open to the modifications and transformations that relations produce in the entities involved. Thus, the anthropological crisis on which the debate on moral status is based is not, in fact, a crisis. On the contrary, the anthropological models presented above show that a reformulation of the dominant conceptions of moral status is required, since these conceptions, from the perspective of the history and philosophy of technology, lose their validity. Their dependence on the anthropology of properties calls for a theoretical reformulation.

While this article has argued for the need to take the role of relations into consideration in debates about the moral status of AI, it should be made clear that this does not mean a complete denial of the anthropology of properties. The final subsection of the article has offered three arguments for rejecting the idea of an entirely relational concept of moral status. Relational critiques will have force if they are able to articulate and complexify hybrid approaches in which both properties and relations are given weight. Two possibilities have been offered: intrinsic properties that are a condition of possibility for valuable relations, and relations that are a condition of possibility for valuable dispositional properties.