Interpreting ordinary uses of psychological and moral terms in the AI domain

Original Research

Abstract

Intuitively, proper referential extensions of psychological and moral terms exclude artifacts. Yet ordinary speakers commonly treat AI robots as moral patients and use psychological terms to explain their behavior. This paper examines whether this referential shift from the human domain to the AI domain entails semantic changes: do ordinary speakers literally consider AI robots to be psychological or moral beings? I discuss three non-literalist accounts of the semantic changes concerning psychological and moral terms used in the AI domain: the technical view (ordinary speakers express technical senses), the habit view (ordinary speakers subconsciously express ingrained social habits), and the emotion view (ordinary speakers express their own affective empathetic emotional states), and I examine whether each account accommodates the results of relevant empirical experiments. The non-literalist accounts prove implausible with respect to the ordinary use of agency-terms (e.g., “believe,” “know,” “decide”), and I therefore conclude that the concepts ordinary speakers express by agency-terms in reference to AI robots are similar to the concepts they express when applying the same terms to humans. When ordinary speakers extend emotion-terms and/or moral-patiency-terms to AI robots, however, I argue that semantic changes have taken place, because ordinary speakers are in fact referring to their own affective empathetic emotional states rather than to the AI robots. This argument suggests that the judgments ordinary speakers make regarding the proper referential extensions of emotion-terms and moral-patiency-terms are fallacious.


Notes

  1. Robots are physical objects that are able to act in the world, while AI need not have the power to act in the real world (Danaher, 2019, p. 130). Throughout this paper, I use the term “AI robots” to emphasize the disjunction of robots and AI. In my usage, AI robots include humanoid robots, nonhumanoid robots, chatbots, AI programs like Siri, and so on.

  2. While this paper focuses on the referential meaning of psychological and moral terms, I acknowledge that referential meanings do not exhaust the possible meanings of the terms.

  3. For the purposes of this paper, I treat concepts as mental representations.

  4. I borrow the label “technical view” from Figdor (2018, p. 129).

  5. This paper engages particularly with empirical research on the ordinary uses of psychological and moral terms in the AI domain. I do not address the large body of work discussing non-linguistic aspects of our attitudes toward AI robots. For an introduction to that discussion, see Wykowska’s (2021) collection of empirical research in experimental psychology using AI robots to study sociocognitive mechanisms.

  6. To clarify, by claiming that the unexpected referential shift concerning agency-terms preserves semantics, I am not asserting or implying that robots make decisions in the same way that human beings do.

  7. Danaher (2021) also adopts this relational approach.

  8. In the current paper, I argue that ordinary speakers use agency-terms with their literal meaning in the AI domain, but when they extend emotion-terms or moral-patiency-terms to AI robots, they fail to mean that the robots are emotional entities or moral patients. In Sect. 4, I discuss the implications of this argument for the relational approach.

  9. While cognitive empathy refers to skills of recognizing others’ emotions and taking others’ perspectives, affective empathy involves sharing others’ feelings (Caravita et al., 2009, p. 141).

  10. Marchesi et al.’s (2019, p. 7) participants were asked to move a slider toward the statement they found to be the most plausible description of the series of pictures showing iCub’s behaviors (e.g., “iCub classifies cubes by color” vs. “iCub would like to keep this cube”).

  11. For the statements that Perez-Osorio et al. (2019) used, see Marchesi et al.’s (2019) Supplementary Materials 1 and 2.

  12. For the statements that Ward et al. (2013) used, see their Supplementary Information, Part I: Mind Indices.

  13. Dennett’s (2013, pp. 91–95) homunculus functionalism is a metaphysics of mind version of the intentional stance. In this paper, however, I do not claim that intentionality is fundamentally derived. Because I distinguish between justification and interpretation, the “technical view,” as I use it here, merely holds that ordinary speakers use folk psychology in the human domain.

  14. Dennett (2017, 2019) criticizes attempts to design AI robots to trigger overly social and emotional responses in people or to possess psychological properties.

  15. Huebner (2010, p. 150) claims that no single uniform strategy is adopted by people in ascribing mental states, and suggests two sorts: the intentional strategy (i.e., a strategy sensitive to considerations of agency, and thus to beliefs, decisions, etc.) and the personhood strategy (i.e., a strategy sensitive to considerations of desires and emotions). To simplify matters, under the technical view, I do not differentiate between the two types of psychological terms. But I do apply the distinction to the habit and emotion views.

  16. For details of Marchesi et al.’s experiment, see Sect. 2.1.

  17. Marchesi et al. speculate that the intentional use of psychological terms could have depended on specific features of the scenarios, such as the pictures comprising the sequence of iCub’s behaviors and the psychological words used in the description (e.g., “cheat” or “understood”), as well as on individual differences among participants, such as cultural backgrounds (p. 9). These possible factors involved in the adoption of the intentional stance need not be discussed in detail to argue against the technical view. However, they will be discussed extensively in relation to the objections to the philosophical significance of the data, namely the habit view and the emotion view.

  18. Abubshait et al. (2021) replicated Marchesi et al.’s (2019) experimental design and obtained results consistent with those of the earlier study.

  19. Bossi et al. (2020) used Marchesi et al.’s (2019) experimental design.

  20. Several other studies in addition to Bossi et al.’s (2020) have found sociocognitive mechanistic similarities (particularly concerning the intentional stance) between the human and AI domains. For example, Wykowska et al. (2014) found that robot actions elicit similar perceptual effects as representations of human actions. Other studies (Abubshait & Wykowska, 2020; Ciardo et al., 2020; Hinz et al., 2019; Wiese et al., 2012) have shown that humans and robots can influence human behavior in similar ways.

  21. Marks of the personal/subpersonal distinction include rationality and beliefs (Drayson, 2014).

  22. By the input/output account, Alexander (2012) means “given scenario x under conditions y, a certain percentage of subjects give answer z” (p. 33).

  23. For the statements used by Marchesi et al. (2019), see the InStance Questionnaire in their Supplementary Material 2 (English version, p. 11).

  24. See Sect. 2.1 of this paper for details of Perez-Osorio et al.’s (2019) experiment.

  25. Kneer and Stuart’s (2021) experimental results also provide evidence of the mindful use of the agency-term “know,” as their participants’ use of the term correlated with the participants’ awareness of the robots’ cognitive abilities (p. 410).

  26. It is worth stressing that I am not arguing for any particular theory in the philosophy of mind (e.g., functionalism); the current paper is not about the metaphysics of mind.

  27. In Ward et al.’s (2013) experiment, different participants were assigned to the robot group, the persistent vegetative state patient group, and the corpse group, respectively (i.e., the types of vignettes were between-subject variables). However, all the participants read instances of violations of the social rule that one should not intentionally harm others. For example, “Participants in the harm condition read that the patient’s nurse intentionally unplugged Ann’s food supply every evening with the intention of starving her and obtaining money from a distant relative named in the patient’s will” (p. 1439).

  28. Huebner (2010, pp. 150–151) also distinguishes between the ways we use agency-terms and emotion-terms: “we distinguish two sorts of strategies that we adopt in evaluating the ascription of mental states to various entities. The first is a strategy that is sensitive to considerations of agency; the second is sensitive to considerations of personhood,” where the latter focuses “on the states that allow an entity to be concerned with how things go for her.” In contrast to the emotion view, however, Huebner takes the commonsense understanding of the mind to be intimately tied to philosophically informed intuitions about the mind (p. 154).

  29. Lonigro et al. (2017, p. 5) assessed cognitive empathy by testing whether their participants distinguished characters’ real feelings from the emotions the characters showed to other people, and they assessed affective empathy with items such as “when somebody tells me a nice story, I feel as if the story is happening to me.”

  30. Pleo is “capable of showing emotional reactions such as joy and fear, and…believable pain reactions” (Rosenthal-von der Pütten et al., 2013, p. 21).

  31. The video types were within-subject variables.

  32. I suspect that the data of Stuart and Kneer (2021) and Huebner (2010) also imply a strong correlation between the attribution of emotions and affective empathy. The participants in these studies attributed emotion-related states (such as “desire,” “pain,” and “happiness”) significantly less frequently than they attributed agency-related states (such as “believe” and “know”) to AI robots. Both studies used vignettes (Huebner, 2010, p. 138; Stuart & Kneer, 2021, p. 10) that explain the robots’ cognitive capacities but contain little information that would lead readers to feel positive or negative affective empathy for the robots. As a result, I argue, the participants did not affectively empathize with the robots’ purported feelings (as described in the vignettes), nor did they attribute emotion-related states to the robots.

  33. Wang and Krumhuber (2018) treated types of robots as between-subject variables in one study and as within-subject variables in another study. They provided the following descriptions to participants (p. 4): “‘This robot can quickly learn various movements from demonstrators and also make additional changes either to optimize the behavior or adjust to situations.’ In order to assign economic versus social value, these profile descriptions were combined with information about the robot’s corresponding function: (a) economic condition: e.g., ‘Therefore, this robot can work as a salesperson in stores and supermarkets, guiding customers to different products and answering their inquiries’ and (b) social condition: e.g., ‘Therefore, this robot can work as a social caregiver, keeping those socially isolated/lonely people accompanied, reminding them of their daily activities and having conversations with them.’”

  34. Wang and Krumhuber (2018, p. 7) write that “while an economic function implies cognitive skills, it is the social function that makes robots capable of experiencing emotions in the eyes of the observer.” Nonetheless, they do not explain why the social function leads to the ascription of emotions.

  35. Shin (2021, p. 212) used Wang and Krumhuber’s (2018) descriptions of social- and economic-robots (see footnote 33).

  36. In this paper’s discussion, I ignore the outliers (i.e., those who ascribed emotions to the economic-robot without experiencing affective empathetic feelings). With respect to agency-terms, nearly half of Marchesi et al.’s (2019) participants chose the intentional stance over the design stance. In Wang and Krumhuber’s (2018) and Shin’s (2021) studies, however, participants disagreed with a statement describing emotional states of an economic-robot at significantly higher rates than they disagreed with a statement describing emotional states of a social-robot.

  37. Determining whether the semantic expansion of the proper domain of agency-terms deserves serious philosophical consideration would require further discussion, for the following reasons. First, Marchesi et al. (2019) remark that their participants’ explanations of iCub’s behavior were somewhat biased toward the mechanistic stance. Second, in Mikalonytė and Kneer’s (2022) experiment, participants were much less willing to ascribe artistic beliefs and intentions (or desires) to AI robots than to ascribe non-artistic psychological states to them. These experimental results suggest that the ordinary expansion of the proper domain of agency-terms depends on the specific type of agency-term.

  38. This interim conclusion (Sect. 2.4) raises two interesting topics for future research: (1) Why do we not feel affective empathy for an economic-robot, and therefore not consider it a sentient being? (2) Why does affective empathy play an essential role in the AI domain (and the geometric shape domain), but not in the human domain, when it comes to treating entities as sentient beings? These are important questions for researchers to consider in order to better understand how we attribute sentience to entities and what role our emotional responses play in that process.

  39. In Sect. 3, I assume that the same concept of moral patiency is expressed in both human and non-human-animal domains. I avoid discussing the moral status of AI robots in relation to the moral status of non-human animals, as discussing animal ethics would complicate the topic, and I want to strictly focus on the semantics of moral terms used in the AI domain in relation to persons.

  40. Wang and Krumhuber (2018) provided the following descriptions to their participants (p. 5): “According to a recent business report, this [economic] robot is predicted to be of high (low) economic value. By economic value, we mean the expected financial benefits and corporate profits they are going to bring to the corporate world. … According to a recent social report, this [social] robot is predicted to be of high (low) social value. By social value, we mean the expected social support and companionship they are going to bring to the human society.”

  41. To measure physiological arousal, Rosenthal-von der Pütten et al. (2013, p. 23) used a multi-modality physiological monitoring device that encodes biological signals (i.e., skin conductance responses) in real time.

  42. See Sect. 3.1 for details of Shin’s (2021) experiment.

  43. Lonigro et al.’s (2017, p. 5) participants were tested for the ability to understand moral emotions (happiness, sadness, anger, and guilt) of fictional characters in moral stories with pictures.

  44. I ignore the outliers (i.e., those who ascribed moral patiency to the economic-robot without experiencing affective empathetic feelings) for the same reason explained in footnote 36, namely, that the outliers do not reflect the typical ascription of moral patiency to AI robots.

  45. The studies cited demonstrate that ordinary speakers not only view AI robots as agents, but also hold them morally responsible for their actions (e.g., Hong et al., 2020; Kneer, 2021; Kneer & Stuart, 2021; Stuart & Kneer, 2021) and even consider them blameworthy or deserving of punishment (e.g., Kahn et al., 2012; Lima et al., 2021). However, the relationship between agency and blame/punishment is complex and beyond the scope of this paper. Future research could explore whether the meaning of moral-agency-terms changes in the context of AI, compared to their usage in the human domain.

Acknowledgements

I would like to thank Heeok Heo, Hong-Im Shin, and On-Soon Lee for their questions and encouragement during my initial attempts at working out some of these issues; the participants in the May 2022 102nd monthly workshop at Sogang University’s Institute of Philosophical Studies, particularly Sangkyu Shin, for helpful comments on an earlier version of this paper; and the participants in the October 2022 “Science, Technology, and Humanities” conference at Kyung Hee University’s Institutes of Humanities, especially Poong Shil Lee, for inviting me to the conference. I am also very grateful to two anonymous reviewers for their detailed critical comments on previous drafts, which led to significant improvements and clarifications throughout.

Funding

This work was supported by a research promotion program of SCNU.

Author information

Corresponding author

Correspondence to Hyungrae Noh.

Ethics declarations

Conflict of interest

The author declares that he has no conflicts of interest or competing interests.

Cite this article

Noh, H. Interpreting ordinary uses of psychological and moral terms in the AI domain. Synthese 201, 209 (2023). https://doi.org/10.1007/s11229-023-04194-3
