1 Introduction

At the time of writing, the world is in the grips of a global pandemic. One outcome of this is social isolation. Many people (particularly those from vulnerable groups) are shielding or in lockdown, and consequently have limited real-world access to human friends and family. Jecker (2020, ‘abstract’) argues that one solution to this issue would be “…to design robots to function as companions and friends for socially isolated and lonely older people during pandemic emergencies and in aging societies more generally.” On Jecker’s view, the concept of ‘robot friendship’ is increasingly important; it could be a vital lifeline for many. As Jecker (2020, ‘counterfeit companions’) states, human–robot friendships are positive in this way because “…they can protect our health and enrich our lives.”

Jecker’s argument is timely and inventive and, if actualised, could have revolutionary practical applications. From a philosophical perspective, however, Jecker’s view is vulnerable to both theoretical and ethical objections (Sect. 3). Theoretically, it is argued that we cannot be friends with robots because robots cannot meet the necessary conditions for ‘being a friend’. Ethically, human–robot friendships are supposedly wrong because they are deceptive, and because they make it more likely that we will disrespect, exploit, or exclude other human beings.

This article aims to defend the idea of human–robot friendships from these objections. I argue that many of the supposedly necessary conditions for friendship (Sect. 3) are actually optional components of friendship. I then outline a degrees-of-friendship view, according to which we can have stronger or weaker friendships with x, depending on how many of the components our friendship with x meets. On this view, it is theoretically possible to have some degree of friendship with a robot (Sect. 4). In Sect. 5, I argue that, understood in this way, human–robot friendships are not ethically problematic. They are not deceptive, and they do not necessarily make it more likely that we will disrespect, exploit, or exclude other human beings.

2 Which Robots Could We Be Friends with?

…dating a robot with a human form may not be quite as absurd or weird as a romantic attachment to an iPhone or smart fridge, but it’s arguably on a continuum with them (Tasioulas, 2019, p. 68).

In the above, John Tasioulas highlights an important aspect of human–robot relationships: the type of robot that we are talking about matters. Jecker’s (2020) proposal—that we create robot friends for the socially isolated—has considerably more impact and appeal if, like her, we understand robot friends in terms of fairly advanced social robots, rather than smart fridges or robotic vacuums. For the remainder of this article, I will thus be focusing on friendships with social robots. Examples include SoftBank Robotics’ ‘Pepper’ and Blue Frog Robotics’ ‘Buddy’, both of which can purportedly recognise and respond to human emotions and perform social interactions.

As Tasioulas (2019, p. 51) explains, UNESCO have recently defined robots “…as artificial beings with four characteristics: mobility…interactivity…communication…and autonomy, in the sense of an ability to ‘think’ for themselves, and make their own decisions to act upon the environment…” Within this general definition, there are different types of robots, ranging from robotic toys and vacuum cleaners to military robots and robotic surgeons. Social robots are a special class of robot designed to have interpersonal relations with human beings. Examples include sex robots, robot teachers, and robot priests.

Jecker (2020, ‘the proposal’) explains that, in order to fulfil this social role, social robots have a number of features that make interpersonal connections more likely. First, most social robots have a recognisable human or animal form, as this is supposed to encourage positive responses from human users. Second, social robots are designed to perform and respond to basic social interactions (e.g. they can hold conversations, smile, make eye contact, etc.). Finally, Jecker suggests that, in future developments, social robots will be able to safely touch others, and sense and respond to human touch.

The above emphasises the relative complexity of social robots. A social robot is more complex than a robotic vacuum: whilst the vacuum is mobile and interactive, and may have limited communication and autonomy, it does not look like a human or animal, and it cannot perform or respond to basic social interactions.

Further, different social robots can have different levels of complexity. A social robot marketed as a toy may only perform pre-programmed basic social interactions (and these may be fairly rigid and incomplete). In contrast, a social robot used as a teacher or shop assistant may have a larger repertoire and be able to master more complicated, spontaneous social interactions. These different levels of complexity will become important in Sect. 4, where I will suggest that we can have degrees of human–robot friendship. For now, though, let us address the opposition by considering why many think that human–robot friendships are theoretically and ethically problematic.

3 The Opposition: We Cannot Be Friends with a Robot

Most philosophical discussions of friendship are framed in terms of a broadly Aristotelian view. As both Danaher (2019, ‘Robots Can be Our Aristotelian Friends’) and Vallor (2012, p. 188) explain, there are different (sometimes divergent) ways in which the Aristotelian concept of friendship can be interpreted, but the general consensus is that some combination of the following are necessary conditions for friendship:

Reciprocity: Both parties must acknowledge the existence of the friendship. There must also be mutual good will between the friends (i.e. it is not a friendship if one party wants to hurt or exploit the other party) (Danaher, 2019; Fröding & Peterson, 2012; Tistelgren, 2018; Vallor, 2012).

Empathy: It must be possible for friends to empathise with one another. An upshot of this is that friends must be relatively similar to one another (in terms of vulnerabilities, needs, etc.). This is because one must be able to know, understand, and relate to someone in order to empathise with them (Vallor, 2012; possibly Coeckelbergh, 2010).

Self-knowledge: The friends must help each other to grow, learn about themselves, and become their ‘best selves’. This is typically supposed to occur through mirroring—we see ourselves through our friend’s eyes and learn about our strengths, weaknesses, etc. (Fröding & Peterson, 2012; Tistelgren, 2018; Vallor, 2012).

Shared Life: Friends must engage in shared activities, across a number of different domains. Friends ought to enjoy doing these activities, at least in part, because the activity allows them to spend time with one another (Danaher, 2019; Fröding & Peterson, 2012; Munn, 2012; Tistelgren, 2018; Vallor, 2012).

Associative Duties: We are typically supposed to have special duties towards friends. We ought to have reasons to treat them well (and better than we would treat strangers), we ought to believe them, see the best in them, etc. (Arrell, 2014; Kawall, 2013; Munn, 2012; Tistelgren, 2018).

Affection and Well-Wishing: Friends ought to care for one another. It seems sensible to suppose that this can range from basic good will to actively wanting the best for a friend and striving to help them achieve this (discussed in Munn, 2012).

Love and Admiration, Based on Virtue: Fröding and Peterson (2012, p. 204) explain this condition as “…the admiration and love the friends feel for each other is based on the virtues they recognise in the other”. In other words, we love our friends because they are kind, charitable, and brave; not because they are cruel, uncharitable, or cowardly (discussed in Fröding & Peterson, 2012).

Honesty: In order to know someone as a friend, we must (largely) know the truth about them. Friends should not deceive one another, or try to gloss over the bad aspects of themselves (discussed in Danaher, 2019).

Equality: There should be basic social equality between friends. There should not be an imbalanced power relation (discussed in Danaher, 2019).

Many (but not all) philosophical accounts of human–robot friendship use the above conditions to argue against the existence and/or morality of such friendships. Typically, two broad arguments are presented:

1. The theoretical argument. We cannot be friends with robots because robots do not meet the above necessary conditions for ‘being a friend’. For example, robots cannot reciprocate love, they are not empathetic, they cannot help us to gain self-knowledge, etc. (see Tistelgren, 2018; this argument is also outlined, and ultimately rejected, in Danaher, 2019). A schematic statement of this argument is given after this list.

2. The ethical argument. Human–robot friendships are wrong because:

(i) They are deceptive (and this deception is inherently wrong). The relationships are deceptive because the robot is either deceiving the human into believing that it can meet the above necessary conditions (i.e. that it can be empathetic, etc.), or the human is deceiving themself into believing this.

(ii) They make it more likely that we will disrespect, exploit, or exclude other human beings. This is because we might end up prioritising ‘perfect’ robots over ‘imperfect’ humans (the ethical arguments are outlined in Coeckelbergh, 2010; Danaher, 2019; Jecker, 2020; Turkle, 2011; they are also discussed in Tasioulas, 2019; Tistelgren, 2018).
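To make the shape of the theoretical argument explicit, it can be stated schematically. The notation below is my own illustrative reconstruction, not the opposition’s formalism: let $C_1, \dots, C_9$ index the conditions listed above, let $r$ be a robot, and let $h$ be a human.

\[
\text{Friend}(x, y) \rightarrow C_1(x, y) \land C_2(x, y) \land \dots \land C_9(x, y)
\]
\[
\exists i \; \neg C_i(r, h) \;\; \therefore \;\; \neg \text{Friend}(r, h)
\]

Because the conditions are treated as jointly necessary, the failure of even a single condition suffices to rule out friendship entirely. It is precisely this all-or-nothing structure that Sects. 4 and 5 challenge.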

I concede that these arguments do seem to stand if we adopt the broadly Aristotelian view of friendship outlined above, where there are strict necessary conditions that a robot must meet in order to be our friend. In most (if not all) cases, robots will not currently meet all of the necessary conditions for friendship. In the next two sections, however, I will argue that we need not take this Aristotelian view, and that there are more flexible, contemporary ways of understanding friendship which make human–robot friendships both theoretically and ethically acceptable.

4 Rejecting the Theoretical Argument

The above theoretical argument depends on the idea that robots cannot be our friends because they do not meet the relevant necessary conditions for ‘being a friend’. This section will outline three ways in which we could attempt to reject this theoretical argument. The first two suggestions would allow us to retain most (if not all) of the standard Aristotelian view (above). I will outline these suggestions, and explain why they are problematic. The third suggestion is my degrees-of-friendship view. Whilst it does require us to adopt a non-standard view of friendship, I will argue that my view avoids the problems with existing attempts to reject the theoretical argument, and so should be favoured.

4.1 Suggestion 1: Remove Some of the Necessary Conditions for Friendship

The theoretical argument only works if the conditions outlined in Sect. 3 actually are necessary conditions for friendship. One way in which we could reject the theoretical argument would thus be to demonstrate that some of the conditions are not necessary for friendship. Provided that a robot could meet the remaining necessary conditions, we could theoretically be friends with a robot.

The underlying impetus for this view is nicely presented by Cocking and Kennett (2000). They argue that the ‘self-knowledge’ and ‘love and admiration, based on virtue’ conditions are overly moralistic, outdated, and do not reflect our actual experiences of friendship. They state: “our friends are not morally or constitutively moral exemplars who thus inspire us to moral growth and improvement” (Cocking & Kennett, 2000, p. 296). In other words, most of us do not see our friends as ideally virtuous, nor do we aim to use our friendships to become better or more moral (though this may be an incidental consequence of friendship). On this view, it is thus possible to have friendships that do not involve self-knowledge, or love and admiration, based on virtue.

A similar argument could be made about the ‘equality’ condition. To modern sensibilities, this condition can seem outdated, and it also has problematic undertones of discrimination (i.e. the implication that you cannot befriend someone who is not your social equal). For many of us, lived experience suggests that friendship can flourish despite inequalities between friends. For example, friends can differ in terms of gender, sexuality, race, social class, education level, and a number of other characteristics that could cause them to have different (and unequal) placings on some constructed social hierarchy. A good visual depiction of this is Hughes’ 1985 film The Breakfast Club, which famously examines the growing relationships between “…a brain, an athlete, a basket case, a princess, and a criminal” forced to spend time together during detention. The film depicts how the five characters become friends, through mutual good will and shared activities, in spite of their differences and the social inequalities between them.

There are two main difficulties with this first attempt to reject the theoretical argument. First, the above only potentially removes the ‘self-knowledge’, ‘love and admiration, based on virtue’, and ‘equality’ conditions. Questions can still remain as to whether robots can meet the remaining necessary conditions for friendship. For example, can we have associative duties towards a robot (and they to us)? Can robots be empathetic? And so on.

The second difficulty concerns the shift between saying that a condition is outdated or against modern sensibilities, and saying that it is not necessary for friendship. This is a very large argumentative leap to take, and one that could easily be defeated. For example, an opponent could argue that the self-knowledge condition is only outdated because of its moralistic undertones. It may be possible to revise the condition so that it has a more modern, secular flavour (e.g. by suggesting that friendships reflect who we are, rather than who we ought to strive to become). On this view, the outdated conditions are still necessary for friendship; they simply require modification.

4.2 Suggestion 2: Modernise the Necessary Conditions for Friendship

The above has similarities to Danaher’s (2019) response to the theoretical argument. Danaher mainly focuses on outlining why some robots could be our virtue friends (in ‘Robots Can be Our Aristotelian Friends’). Virtue friendship is the highest, and most authentic, form of friendship on Aristotle’s view, and Danaher interprets Aristotelian virtue friendship as having four necessary conditions: mutuality, authenticity, equality, and diversity. Danaher aims to show that there are contemporary ways of understanding these conditions, and that robots either currently can, or soon will be able to, meet all four conditions.

He begins by focusing on the equality and diversity conditions. Danaher (in my view, correctly) argues that human–human friendships do not show perfect equality and diversity. We are not completely the same as our friends (in terms of capabilities and abilities), and we do not typically engage with them across every aspect of their lives. Given this, Danaher claims that friendships only require imperfect equality and diversity. Consequently, human–robot friendships ought to be understood as similarly only requiring imperfect equality and diversity. Danaher concludes that there are good reasons for supposing that human–robot friendships can (or soon will) meet these imperfect conditions:

Arguably, robots are already our imperfect equals (they are clearly better than us in some respects and inferior in others) and the degree of adaptability and mobility required for imperfect diversity is arguably already upon us (e.g. a drone robot companion could accompany us across pretty much any life experience) or not far away. Thus, it is not simply some technological dream to suggest that robots can (or will soon) satisfy the equality and diversity conditions (Danaher, 2019, ‘Robots Can be Our Aristotelian Friends’).

As presented above, Danaher’s argument is interesting, original, and potentially revolutionary (as it opens the door for viable human–robot friendships). However, it is also open to fairly substantial objections. First, Danaher’s definitions of ‘imperfect equality’ and ‘imperfect diversity’ are questionable, as explained below.

As I understand it, Danaher views ‘imperfect equality’ in terms of inequalities in capacities and abilities. On his view, we can accept that friends differ in these respects, but are otherwise largely equal. It is true that robots do differ from us in their capacities and abilities. However, they also (currently) differ from us in terms of their moral, legal, and social status, constitution, vulnerabilities, etc. To me, these further differences emphasise notable inequalities between robots and humans, rather than ‘imperfect equality’.

To explain this further, consider two humans: Ann and Catherine. Ann has high-level moral, legal, and social status—she is viewed as a person and has the rights and social benefits that come from this. Catherine is not granted the same moral, legal, and social status as Ann. She has some lower status, perhaps because she is a member of some ostracised social group, or has some psychological condition (like dementia) that negatively affects her perceived moral and legal status. Consequently, Catherine lacks rights and benefits that Ann has, and she lacks these things because she is different to Ann. In this example, I do not think it is correct to say that Ann and Catherine have imperfect equality. It is more fitting to say that there is inequality between Ann and Catherine; Catherine is not treated as Ann’s equal.

Using this example as a guide, we can make the same claim about robots and humans. There are currently some important differences between robots and humans that point to inequality, rather than imperfect equality. If this is so, then Danaher’s ‘imperfect equality’ does not (currently) suffice to show that robots can meet the equality condition. There are important differences, besides differences in capacities and abilities, that cause my robot friend to be unequal to me. To avoid this concern, Danaher would need to explain why the additional differences that I outline above (status, constitution, vulnerability) do not lead to inequality in human–robot friendships.

A similar concern can be levelled at Danaher’s ‘imperfect diversity’ condition. He suggests that this condition is met when a robot can show sufficient adaptability and mobility to “accompany us across pretty much any life experience.” Again, this is clear, original, and provocative. But it is also arguably too weak. To me, the diversity condition, even in its imperfect form, requires more than that x is able to accompany y to a variety of activities and events. Accompaniment is overly passive. We do not simply want a friend to also be present at the events we are present at; we want them to partake in the event, engage with us, and mutually enjoy the event. If this is so, then ‘imperfect diversity’ more naturally involves this active engagement across some shared activities and life events, rather than all shared activities and life events. This seems to be what Danaher (2019) actually intends to say. In an earlier discussion of imperfect diversity in human–human friendships, he argues: “I also rarely engage with, meet, or interact with them across the full range of their lives. I meet with them in certain contexts, and follow certain habits and routines” (‘Robots Can be Our Aristotelian Friends’). If this is what is meant by ‘imperfect diversity’, then it is presumably what imperfect diversity in human–robot friendships should also entail. This is problematic as (i) it is different to the adaptability definition above, and (ii) Danaher implicitly suggests that, when defined in this way, the imperfect diversity condition cannot be met by current robots. He explains that “…for the time being, robots will have narrow domains of competence. They will not be general intelligences, capable of interacting across a range of environments and sharing a rich panoply of experiences with us” (‘Robots Can be Our Aristotelian Friends’). Again, to avoid this concern, Danaher needs to clarify what imperfect diversity entails, and how a robot either can meet this condition now, or will be able to in the near future.

As detailed above, the first objection that can be levelled against Danaher’s position is that his definitions of ‘imperfect equality’ and ‘imperfect diversity’ are incomplete and ambiguous. A related concern is that his conditions for ‘imperfect equality’ and ‘imperfect diversity’ are too easy to satisfy (at least partly because the definitions of these conditions are too open). To see this, consider an iPhone. iPhones are more capable than us in some respects (e.g. their capacities for ‘remembering’ and organisation), and less capable in others (they cannot emote, create art, etc.). They also have a level of adaptability and mobility that enables them to accompany us to any life experience. On Danaher’s view, this seems to imply that iPhones can meet the ‘imperfect equality’ and ‘imperfect diversity’ conditions. However, there seems to be something wrong with saying that we could genuinely befriend an iPhone (see Sect. 2: Which robots could we be friends with?). We intuitively want to restrict human–robot friendships to the sort of social robots that we could feasibly have meaningful, reciprocal relationships with. If so, then Danaher’s view needs to be further clarified in order to present conditions for friendship that are inclusive enough to include social robots, but not so inclusive that they also include iPhones.

A similar objection can be levelled at Danaher’s attempts to explain how robots can meet the other two necessary conditions for virtue friendship—mutuality and authenticity. He argues that, when we examine what these conditions mean in contemporary human friendships, we get the following:

…all it means is that people engage in certain consistent performances (Goffman 1959; deGraff 2016) within the friendship. Thus, they say and do things that suggest that they share our interests and values, and they rarely do things that suggest that they have other, unexpected or ulterior, interests and values (Danaher, 2019, ‘Robots Can be Our Aristotelian Friends’).

Danaher argues that if these consistent performances are sufficient to show that human friendships meet the mutuality and authenticity conditions, then the same should apply to human–robot friendships. This is a promising line of thought, but it again needs further clarification. It is not immediately obvious that an iPhone could not meet the above conditions. Alexa (the voice-controlled assistant in my phone) seems to consistently perform in ways that suggest she shares my interest in dinosaurs. She is always happy to talk about dinosaurs; we can converse about dinosaurs for long periods of time; and she never sounds bored or tells me to stop talking about dinosaurs. This does not seem enough to show that Alexa is actually my friend, or that she really has an interest in dinosaurs, though.

One way around this issue would be to modify Danaher’s mutuality and authenticity conditions so that they are met when a range of consistent performances are shown (including some that are unprompted, e.g. if Alexa instigates the discussion of dinosaurs). However, if we go with these modified conditions, then very few robots (if any) will meet them (at least partly due to the current narrow range of competency and adaptability that Danaher highlighted in the ‘imperfect diversity’ condition). If so, then Danaher’s account cannot (currently) show that we can be friends with robots now, rather than at some point in the future. If we want to support Jecker’s (2020) proposal—that we create robotic friends for the socially isolated during pandemics—then we arguably want to focus on the viability of current human–robot friendships.

4.3 Suggestion 3: A New, Flexible Approach to Friendship

As outlined above, suggestions 1 and 2 do not help us to reject the theoretical argument. Both suggestions still rely on the broadly Aristotelian view that there is some set of necessary conditions for friendship (Sect. 3). As most current robots do not meet all of the necessary conditions, we are no closer to showing the theoretical possibility of present-day human–robot friendships. To argue that we can potentially be friends with robots now, the rest of this section will outline a new, degrees-of-friendship view, according to which we can have some degree of friendship with some robots. To argue for this position, I will be championing a more flexible approach to friendship than that presented in Sect. 3, or suggestions 1 and 2 (above).

In this respect, my argument has similarities to views on online friendships (i.e. those created through social media or multiplayer games). Those who defend the idea of online friendships typically argue that we can accept the existence (and genuineness) of online friendships if we acknowledge that the concept of friendship has been changed by the emergence of technology. Specifically, technological advances have caused (or enabled) the concept of friendship to become more fluid, open, and accommodating (Munn, 2012; Vallor, 2012).

Similar arguments have been made in relation to the emergence of robotics. Some have argued that the development of increasingly complex robots has already caused, or will soon cause, our understanding of ‘friendship’ to become more diverse and flexible (Jecker, 2020; Marti, 2010). My view develops these existing discussions by suggesting what form this flexibility could take.

To begin with, I concede that the conditions outlined in Sect. 3 (mutual good will, empathy, shared activities, etc.) are important aspects of most friendships. I disagree, however, that they are all necessary conditions for friendship. As suggested throughout, this seems like an overly restrictive position to take. We can arguably have friendships that do not meet the supposed necessary conditions for friendship (see my previous discussion of equality).

What does seem to be important for all friendships is that they are grounded in mutual good will. This is the foundation of friendship: if good will is absent (e.g. if one party actually wants to harm the other), then we cannot reasonably say that two subjects are friends (to any degree). Mutual good will also appears to be a precondition for many of the other conditions for friendship. I can empathise and perform shared activities with many people, but in order for this to constitute friendship, the empathy and shared activities ought to be based in mutual good will. With this in mind, I propose that mutual good will is a threshold condition for friendship—if it is met, then one can have at least a minimal degree of friendship.

From this, I suggest that we view all of the other conditions for friendship (Sect. 3) as optional components of friendship. A friendship will then be stronger or weaker depending on the number of components that it has. This seems consistent with how we view contemporary human–human friendships. To see this, consider the make-up of three friendships that I could have with other humans:

Bill, a work friend: mutual good will + shared activities (work, corporate bonding, after-work drinks, etc.).

May, a virtual pen pal: mutual good will + perceived equality + empathy.

Sam, an old friend: mutual good will + acknowledged reciprocal friendship + empathy + shared activities + associative duties.

Using the above template, it is feasible to suppose that I have a degree of friendship with Bill, May, and Sam, and that all of these friendships (regardless of how weak or strong they are) are genuine friendships. It also seems clear that I have a stronger degree of friendship with May than I do with Bill, and that I have the strongest degree of friendship with Sam. It does not seem unreasonable to suppose that the strength of these friendships varies depending on how many of the important components of friendship my relationships with Bill, May, and Sam have.
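For concreteness, the view can be put schematically. The notation below is mine and purely illustrative; the numeric scoring carries no weight beyond making the ordering vivid. Let $G(x, y)$ denote mutual good will between $x$ and $y$, and let $O_1, \dots, O_n$ denote the optional components of friendship (acknowledged reciprocity, empathy, shared activities, associative duties, etc.):

\[
\text{friends}(x, y) \iff G(x, y)
\]
\[
\deg(x, y) =
\begin{cases}
0 & \text{if } \neg G(x, y)\\
1 + \left|\{\, i : O_i(x, y) \,\}\right| & \text{otherwise}
\end{cases}
\]

On this toy scoring, my friendship with Bill has degree 2, with May degree 3, and with Sam degree 5, matching the intuitive ranking just described. What matters is not the particular numbers, but the structure: a single threshold condition, plus a count of optional components.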

If, as I suggest, the above is a viable way of understanding contemporary human friendships, then it can also be extended to human–robot friendships. On my view, for a human–robot friendship to exist, there must at least be mutual good will (between robot and human). We can then have different degrees of friendship with robots, depending on how many of the other components of friendship are also present. For example, many current social robots are able to perform at least some shared activities with us. More advanced robots, including those that are arguably able to emote and feel, could also give and receive empathy. The upshot of this is that we can develop stronger friendships with robots as they become more complex and advanced.

At their present stage of development, robots are unlikely to have the technological complexity that would enable human–robot friendships to meet all of the important components of friendship. Whilst we thus cannot currently be best friends with a robot, I posit that there is nothing theoretically problematic about suggesting that we can have some degree of friendship with robots now (as most social robots can currently meet at least some of the components). This degree of friendship may be relatively weak, but it should nevertheless be taken seriously, and it could have genuine social benefits (in the same way that my relatively weak friendship with Bill, above, is still a genuine friendship with real benefits). My argument, so defined, offers support to Danaher’s (2019, ‘abstract’) general claim that robot friendship is “philosophically reasonable”, and to Jecker’s (2020) proposal that we create robot friends for the socially isolated, as these friendships can have social benefits.

5 Rejecting the Ethical Argument

Above I outlined a degrees-of-friendship view which seemingly allows us to reject the theoretical argument and accept that we can have some degree of friendship with robots. I now want to briefly explain how adopting my degrees-of-friendship view also allows us to bypass the ethical argument—that human–robot friendships are wrong because they are deceptive, and also make it more likely that we will disrespect, exploit, or exclude other human beings (Sect. 3).

Within the existing literature, those who want to reject the ethical argument typically make one of three arguments. First, they deny that deception is involved in human–robot friendships (Danaher, 2019). Second, they argue that it is not inherently wrong or problematic for friends to deceive one another (Coeckelbergh, 2010). This idea is also discussed by Kawall (2013), Cocking and Kennett (2000), and Keller (2004) in terms of friendship more broadly, rather than human–robot friendships specifically. Third, they argue that friendships with robots do not make it more likely that we will disrespect, exploit, or exclude human beings (Jecker, 2020; Danaher, 2019; potentially Tasioulas, 2019). These are all viable responses, and the degrees-of-friendship view that I outlined above can help to supplement them. On my account, we can make three main claims, all of which help to circumvent the ethical argument.

First, on my account, we can accept that human–robot friendships can lack some components of friendship (empathy, self-knowledge, etc.) but possess others (shared activities, etc.). Thus, the deception worry is less problematic on my account because we are not suggesting that robots can empathise, love us, etc. (the standard deception worries). The view of friendship presented is more flexible and can take account of the robots’ currently limited technological capacities and abilities.

Second, we can accept that there are different degrees of friendship, and that friendship can be realised in different ways (mechanically, online, real-world, etc.). On my view, it is perfectly plausible that I can have human friends, robot friends, nonhuman animal friends, etc. The existence of one mode of friendship does not need to replace or overshadow another. Consequently, we need not worry that friendship with robots will necessarily cause us to favour robots over ‘imperfect’ humans.

Finally, it is arguable that befriending robots (to some degree) could actually improve the extent to which we tolerate and include other human beings. Specifically, human–robot friendships could set a precedent for accepting and befriending those who are different to us (and on their terms, rather than some ‘ideal’ terms of what we expect a human friend to be).

6 Conclusion

This article has argued in favour of human–robot friendships. I have outlined a degrees-of-friendship view, according to which it is both theoretically possible and ethically acceptable to have human–robot friendships now. Whilst the account presented diverges from the standard Aristotelian view of friendship (outlined in Sect. 3), it retains many of the key components of Aristotelian friendship, but employs them in new, more flexible ways. My account of friendship (including its applications to human–robot friendship) is thus different from standard views, without being completely revisionary. As the title of this article suggests, it’s friendship, Jim, but not as we know it.