
Belief in Sharing the Same Phenomenological Experience Increases the Likelihood of Adopting the Intentional Stance Toward a Humanoid Robot

Volume 3, Issue 3: Autumn 2022. DOI: 10.1037/tmb0000072

Published on Jul 07, 2022

Abstract

Humans interpret and predict others’ behaviors by ascribing intentions or beliefs, or in other words, by adopting the intentional stance. Since artificial agents are increasingly populating our daily environments, the question arises whether (and under which conditions) humans would apply the “human model” to understand the behaviors of these new social agents. Thus, in a series of three experiments, we tested whether embedding humans in a social interaction with a humanoid robot displaying either human-like or machine-like behavior would modulate their initial tendency to adopt the intentional stance. Results showed that humans are indeed more prone to adopt the intentional stance after having interacted with a more socially available and human-like robot, while no modulation of the adoption of the intentional stance emerged toward a mechanistic robot. We conclude that short experiences with humanoid robots presumably inducing a “like-me” impression and social bonding increase the likelihood of adopting the intentional stance.

Keywords: intentional stance, shared phenomenological experience, social bonding, human–robot interaction, Wizard-of-Oz

Supplemental materials: https://doi.org/10.1037/tmb0000072.supp

Acknowledgments: The authors would like to thank Nicolas Spatola for his help and advice in performing the InStance test (IST) split-half procedure, and Giulia Siri for her help in recording the videos for the validation of the behaviors.

Funding: This work has received support from the European Research Council under the European Union’s Horizon 2020 research and innovation program, ERC Starting Grant, G.A. number: ERC-2016-StG-715058 awarded to Agnieszka Wykowska. The content of this article is the sole responsibility of the authors. The European Commission or its services cannot be held responsible for any use that may be made of the information it contains.

Disclosures: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Author Contributions: Serena Marchesi, Jairo Perez-Osorio, and Agnieszka Wykowska designed the experiments. Serena Marchesi and Davide De Tommaso created the robot behaviors. Davide De Tommaso programmed and implemented the behaviors on the robot. Serena Marchesi performed data collection and ran the analyses. Serena Marchesi and Agnieszka Wykowska discussed the data. Serena Marchesi, Jairo Perez-Osorio, and Agnieszka Wykowska wrote the article. All authors contributed to reviewing the article and approving it.

Data Availability: The data sets for this study will be available upon acceptance in the following OSF repository: https://osf.io/xnm5c/. A preprint of this study is available at https://psyarxiv.com/te8rb/.

Correspondence concerning this article should be addressed to Agnieszka Wykowska, Social Cognition in Human–Robot Interaction, Istituto Italiano di Tecnologia, Via Enrico Melen, 83, Genova, 16152, Italy [email protected]


Being intrinsically social, humans need to develop the ability to interpret and understand the behaviors of others occupying the same environment (Baron-Cohen et al., 1999). Meltzoff (2007) suggests that the way we learn to understand others is through learning about ourselves and subsequently perceiving (and explaining) others as “like me.” He proposes that understanding the similarities between the self and the other is the foundation of social cognition. This basic knowledge and ability provide toddlers (and, later in life, adults) with a framework to interpret others’ behaviors.

The most efficient strategy to predict and interpret humans’ behavior (others’ and one’s own) is to refer to underlying inner mental states, such as desires, intentions, and beliefs (Dennett, 1987; Fletcher et al., 1995; Frith & Frith, 2012; Gallotti & Frith, 2013). Interestingly, referring to others’ mental states to explain behavior might not be limited to only humans. Evidence showed that attribution of mental states to others occurs also with respect to nonhuman entities (Apperly & Butterfill, 2009; Butterfill & Apperly, 2013; Happé & Frith, 1995; Heider & Simmel, 1944).

Given that artificial agents, such as humanoid robots, are increasingly populating our daily lives in various contexts (Prescott & Robillard, 2021; Samani et al., 2013), it remains to be answered whether we deploy similar sociocognitive mechanisms to interpret their behavior as we do toward other humans (Hortensius & Cross, 2018; Wykowska, 2021; Wykowska et al., 2016). When it comes to unfamiliar agents, Wiese et al. (2017), along with recent literature, suggest that we might interpret their behaviors as if they were intentional because this is the default way of making sense of the social world (Airenti, 2018; Urquiza-Haas & Kotrschal, 2015; Wiese et al., 2017). This strategy is spontaneous, quick, and has a high benefit-to-cost ratio: it is very efficient in interpreting people’s behavior, it has been trained from the earliest stages of cognitive development, and it constitutes the default strategy for understanding and predicting human (or any complex) behavior (Perez-Osorio & Wykowska, 2020). Therefore, when facing novel agents, we apply the schema and knowledge we are most familiar with: the “human model” (Perez-Osorio & Wykowska, 2020; Wiese et al., 2017). This reasoning is in line with Meltzoff’s “like me” account, and recent literature shows that this account can be applied to human–robot interaction (Riddoch & Cross, 2021). In this context, empirical studies have investigated whether humans would indeed interpret the behavior of artificial agents by ascribing mental states to them as automatically as they do toward other human agents (Abu-Akel et al., 2020; Gallagher et al., 2002; Marchesi et al., 2019; for a review, see Perez-Osorio & Wykowska, 2020). In other words, the literature has investigated whether humans would adopt the intentional stance (Dennett, 1971, 1983, 1987) toward artificial agents.

The Intentional Stance Framework

Dennett’s theoretical framework accounts for different strategies that humans might adopt when they need to interpret another entity’s behavior. These strategies, or “stances,” explain and predict behavior at different levels of abstraction: (a) with reference to the physical domain of the agent; for example, in the case of an artificial agent, such as a humanoid robot reaching for and grasping a bottle, one could explain the behavior by reasoning that electrical current drives the motors, overcoming the friction of the internal parts of the robot arm, which moves the arm toward the bottle and then closes the fingers over the object (physical stance); (b) with reference to how the system was designed to function; for example, one can expect that a humanoid robot would grasp a bottle because it has been programmed to do so when a command is given (design stance); (c) with reference to the mental states and beliefs of the agent; that is, a humanoid robot grasps a bottle because it wants to do so (intentional stance). According to Dennett (1987), while the first two stances apply to all systems, the third makes stricter assumptions about the type of agent for which it works efficiently. Dennett (1987) describes the process of adopting the intentional stance as follows: The observer first decides to treat the observed agent as rational. Then, the observer attributes mental states to the agent (i.e., the desires or beliefs that the agent might have). Finally, based on these assumptions, the observer predicts that the agent will act to pursue its goals in line with its mental states. Therefore, when adopting the intentional stance, we assume that the behavior we are predicting is the most rational one the agent can exhibit in that context, given its beliefs, desires, and constraints. Dennett highlights that any system can be treated as a rational (and intentional) agent. However, only for truly intentional agents (“true believers”) is the intentional stance the most efficient strategy; for other agents or systems, it makes more sense to switch to a different, more efficient stance (i.e., the design or the physical stance; Dennett, 1981).

In the context of investigating the adoption of the intentional stance toward artificial agents, special interest has been given to humanoid robots, since they represent entities that are somewhat “in-between.” As hypothesized by the new ontological category theory (NOC; Kahn & Shen, 2017) and similarly discussed by Wiese et al. (2017), on the one hand, as man-made artifacts, humanoid robots should elicit the adoption of the design stance. On the other hand, given their shape, physical features, and perhaps behavior, they might evoke the human (intentional) model. Thus, humans might have a tendency to anthropomorphize humanoid robots by ascribing to them typically human characteristics (Airenti, 2018; Epley et al., 2007, 2008; Złotowski et al., 2014). As argued by Spatola et al. (2021), the intentional stance is a similar, but distinct, concept from anthropomorphism. That is because anthropomorphism refers to the general tendency to attribute human characteristics to nonhumans, for example, saying that a wooden stick has “legs and arms” (Epley et al., 2008; Waytz, Morewedge, et al., 2010), whereas the intentional stance refers specifically to mental states. Thus, the intentional stance and anthropomorphism, although closely related, should be considered separate concepts. In the case of human–robot interaction, one could, for example, expect that the higher an individual’s likelihood of adopting the intentional stance toward a robot, the higher their tendency to anthropomorphize it.

Operationalization of the Intentional Stance in Human–Robot Interaction

Recently, several authors have empirically investigated the adoption of the intentional stance toward robots (Marchesi, Spatola, et al., 2021; Marchesi et al., 2019; Thellman & Ziemke, 2020; Thellman et al., 2017). For instance, Thellman et al. (2017) exposed participants to images of humans and humanoid robots. Participants’ task was to rate the perceived level of intentionality of the depicted agent. Participants reported similar levels of perceived intentionality for the two agents’ behaviors. Marchesi et al. (2019) addressed the challenge of operationalizing the philosophical concept of the adoption of the intentional stance toward a humanoid robot by creating a new tool, the InStance test (IST), to assess people’s individual tendency to attribute intentionality to a humanoid robot. The IST includes 34 pictorial scenarios (each containing three pictures) depicting the iCub humanoid robot (Metta et al., 2010). Each scenario is associated with two descriptions: one always explains the robot’s behavior with reference to a mechanistic vocabulary (mechanistic description), and the other always describes the robot’s behavior with reference to a mental state (mentalistic description). In other words, one sentence relates to the adoption of the design stance, while the other instantiates the adoption of the intentional stance. In the original study (Marchesi et al., 2019), participants were asked to move a cursor along a slider, toward the description that best represented their interpretation of the observed scenario. Results showed that participants had an overall slight bias toward the mechanistic option at the group level, but there was also some tendency to adopt the intentional stance. This means that, in line with the NOC hypothesis (Kahn & Shen, 2017), they were not firmly adopting the design stance but were quite unsure about which stance was optimal for interpreting the behaviors of the robot. Indeed, depending on the scenario and individual tendencies, participants were prone to adopt one or the other stance toward iCub. This result was further investigated by Marchesi, Spatola, et al. (2021), who adapted the IST to a two-alternative forced-choice (2AFC) task: Each scenario from the IST was shown twice, associated either with the mechanistic description or with the mentalistic one. Participants were asked to judge whether the description they were reading fitted the scenario they were observing (yes/no choice). Moreover, the authors created a version of the IST with a human character instead of the robot, to compare stance adoption between the two agents. They reported that, although participants were more prone to accept a mechanistic explanation (compared to a mentalistic one) for the robot, no difference was found in participants’ response times when choosing the stance to adopt toward the robot. Interestingly, the results of Marchesi et al.’s studies (Marchesi, Bossi, et al., 2021; Marchesi et al., 2019) also showed that individuals differed in their bias toward adopting one or the other stance toward a humanoid robot. Bossi et al. (2020) later found that it is possible to predict this individual bias in adopting the intentional or the design stance from neural oscillatory patterns during the resting state (i.e., before any task is given to participants).
This suggests that the adoption of the intentional stance may be the default and spontaneous way of making sense of other agents (Abu-Akel et al., 2020; Meyer, 2019; Raichle, 2015; Schilbach et al., 2008; Spreng & Andrews-Hanna, 2015; Waytz, Cacioppo, et al., 2010). Moreover, recent studies reported that the spontaneous adoption of the intentional stance toward robotic agents might be elicited by the individual tendency to anthropomorphize nonhuman agents (Marchesi, Spatola, et al., 2021; Spatola et al., 2020). To further explore the relationship between anthropomorphism and the adoption of the intentional stance, Spatola et al. (2021) examined the psychometric structure of the IST, testing its internal and external validity, with a specific interest in anthropomorphism. The authors reported a two-factor structure that correlates with anthropomorphic attributions to robotic agents and concluded that although the intentional stance and anthropomorphism rely on similar constructs (such as social cognition), they remain two distinct concepts, as the correlation indices were medium to low.

Aim of Study

The present study aimed at examining whether interaction with the humanoid robot iCub in a naturalistic context modulates the general tendency to adopt the intentional stance toward the robot. More specifically, we addressed the question of whether creating a “like me” context through human-like behavior and social bonding would increase the likelihood of adopting the intentional stance, while generating a “different-from-me” mechanistic behavior would have the opposite effect. To this aim, we conducted a series of three experiments. In Experiment 1, participants experienced a social context of watching movies together with the iCub robot. In line with Meltzoff’s (2007) account, we created a context that should affect the adoption of the intentional stance through the “like-me” impression of the robot displaying human-like emotional reactions contingent on the events in the movies. In addition, the context should create social bonding with iCub through the phenomenological experience of sharing a familiar social situation. We hypothesized that this manipulation would activate the “human” model, leading to the adoption of the intentional stance toward iCub. We measured whether the experimental manipulation affected the degree to which the intentional stance was adopted by administering half of the items of the IST before the interaction with the robot and the other half after the interaction. In Experiment 2, we aimed at replicating the results of Experiment 1 and tested the validity of the IST by keeping the social interaction with the robot identical to Experiment 1 while changing the way the IST was split into pre- and post-interaction items. In Experiment 3, the social context of watching the videos remained the same as in Experiments 1 and 2. However, the “like-me” behavior was no longer present, as the robot was made to behave in a mechanistic, robotic manner. We hypothesized that this should reduce (or eliminate) the “like-me” impression and social bonding. The robot’s behaviors were programmed to display very repetitive and mechanical movements. As the results of Marchesi et al. (2019) show an overall bias toward the mechanistic option at the group level, we did not expect participants to exhibit a completely “mentalistic” score on the IST. Nonetheless, we expected a modulation of the initial overall tendency toward higher IST scores after being exposed to the embodied robot exhibiting either human-like or mechanistic behaviors.

Robot Platform and Experimental Measures

Robot Platform and Behaviors

The iCub is a humanoid robotic platform with 53 degrees of freedom (DoF; Metta et al., 2010). Its design allows the investigation of human social cognition mechanisms by generating an interaction context of high ecological validity. iCub can reliably perform human-like movements and can thereby be used as a “proxy” for social interaction with another human. In Experiments 1 and 2, we designed three different robot behaviors, which were reactions (sadness, awe, and happiness) of the robot to the displayed videos. To implement movements that would be perceived as maximally human-like, the behaviors followed the principles of animation (Sultana et al., 2013) and were implemented via the middleware Yet Another Robot Platform (YARP; Metta et al., 2006), using the position controller following a minimum jerk profile for head, torso, and arm joint movements. The gaze behavior was implemented using the 6-DoF iKinGazeCtrl (Roncone et al., 2016), which uses inverse kinematics to produce eye and neck movements. Behaviors were programmed to occur in specific time frames, corresponding to the apex event of each video. Moreover, to maximize human likeness during the verbal interaction at the beginning and the end of the robot session, the verbal emotional reactions and sentences were prerecorded by an actor and digitally edited with the Audacity Cross-Platform Sound Editor to match the childish appearance of the iCub. The greeting sentences at the beginning and the end of the experiment were played by the experimenter via a Wizard-of-Oz manipulation (WoOz; Kelley, 1983). The WoOz manipulation consists of an experimenter completely (or partially) remotely controlling a robot’s actions during an interaction (movements, speech, gestures, etc.; for a review, see Riek, 2012). This method allows researchers to elicit a more natural interaction between the robot and the participant in the absence of artificial intelligence (AI) solutions that would allow the robot to behave in a similar manner autonomously. In addition, since the robot directly addressed the participants during the WoOz interaction, the cameras in the robot’s eyes actively recognized participants’ faces to create mutual gaze between iCub and the participants. Mutual gaze in human–robot interaction has been shown to be a pivotal mechanism that influences human social cognition (Kompatsiari et al., 2018, 2021). Facial expressions on the robot were programmed to display the three different emotions (sadness, awe, and happiness) via the YARP emotion interface module.
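
To make the minimum jerk profile mentioned above concrete, the following Python sketch computes intermediate joint targets between a start and an end posture. The joint values, duration, and sampling step are illustrative assumptions for exposition only; on iCub, trajectory generation of this kind is handled by YARP’s position controller rather than by user code like this.

```python
import numpy as np

def minimum_jerk(q_start, q_end, duration, dt=0.01):
    """Minimum jerk position profile between two joint configurations.

    Returns an array of intermediate joint targets sampled every dt seconds;
    velocity and acceleration are zero at both endpoints.
    """
    t = np.linspace(0.0, duration, int(round(duration / dt)) + 1)
    tau = t / duration                           # normalized time in [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5   # smooth 0 -> 1 blend
    return q_start + (q_end - q_start) * s[:, None]

# Hypothetical example: move three head/torso joints (deg) over 1.5 s
trajectory = minimum_jerk(np.array([0.0, 0.0, 0.0]),
                          np.array([15.0, -10.0, 5.0]),
                          duration=1.5)
print(trajectory.shape)  # (151, 3): 151 intermediate targets for 3 joints
```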

In Experiment 3, we designed the behavior of the robot in reaction to the videos such that it always performed the same repetitive movements of the torso, head, and neck. Cameras were deactivated and, thus, there was no mutual gaze between the robot and participants. The WoOz manipulation was replaced with preprogrammed robotic actions, such as the calibration of joints. The verbal interaction was replaced with a verbal description of the robot’s calibration sequences, created and played via text-to-speech. All the emotional sounds played during the videos in Experiments 1 and 2 were replaced with a “beep” sound. In all three experiments, all sounds and recordings were played via two speakers positioned behind the robot, creating the impression that the source of the sound was the robot itself. Videos of the behaviors and verbal scripts are available at https://osf.io/xnm5c/.

Experimental Procedure and Measures Common Across All Three Experiments

The experimental structure consisted of three main parts that were identical across all three experiments.

Part 1—IST Preinteraction

Participants completed the first half of the IST (Marchesi et al., 2019) to assess their initial tendency to adopt the intentional stance toward robots. In Experiment 1, the IST split was conducted in accordance with Marchesi, Bossi, et al. (2021), by assigning items to Group A or B so as to obtain two groups with comparable means and standard deviations of the InStance score (based on data from Marchesi et al., 2019). In Experiment 2, the IST split was in accordance with the psychometric structure that emerged from the original IST data set (Marchesi et al., 2019), following the method proposed by Spatola et al. (2021). Spatola et al. describe a two-factor structure of the IST, one factor mostly involving an “alone robot” construct and the second a “social” construct in which the robot is depicted in the presence of another human. Thus, we performed a factor analysis on the data set reported by Marchesi et al. (2019) and split the 34 items of the IST balancing the resulting factors across the two halves. In all experiments, the presentation of the two groups of items (Groups A and B) was counterbalanced across participants between pre- and post-interaction with the robot.
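
As an illustration of the Experiment 1 split (two halves with comparable means and standard deviations), the following Python sketch assigns the 34 items to Groups A and B from normative item means. The item values are randomly generated placeholders, and the simple sort-and-deal rule is only an assumption for exposition; it is not necessarily the exact procedure used by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder normative means (0-100) for the 34 IST items; in the study,
# these statistics came from the Marchesi et al. (2019) data set.
item_means = rng.uniform(20, 80, size=34)

# Sort items by their mean score and deal them alternately into the two
# halves, which keeps the group means and standard deviations comparable.
order = np.argsort(item_means)
group_a, group_b = order[0::2], order[1::2]

for name, idx in (("A", group_a), ("B", group_b)):
    print(f"Group {name}: n = {len(idx)}, "
          f"M = {item_means[idx].mean():.1f}, SD = {item_means[idx].std(ddof=1):.1f}")
```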

Figure 1

Example of Item From the IST (Marchesi et al., 2019)
Note. IST = InStance test.

Regarding the IST task itself (preinteraction), participants observed scenarios depicting the iCub robot and had to drag a slider toward the description that they found best fitted what was displayed in the pictures (Figure 1). After completing the IST, they filled out a questionnaire assessing their negative attitudes toward robots (Negative Attitudes toward Robots Scale [NARS]; Nomura, 2014; Nomura et al., 2011). Moreover, in Experiments 2 and 3, we also assessed participants’ personality phenotype (Big Five Inventory [BFI]; Goldberg, 1993). The reason for assessing individual attitudes and personality phenotypes is their potential influence on (social) cognition mechanisms (for a review, see Evans, 2008). In particular, it was pivotal for us to assess participants’ negative attitudes toward robots before the actual interaction, as recent literature has reported the influence of such individual biases in human–robot interaction (Ghiglino et al., 2020; Spatola & Wudarczyk, 2021; for a review, see Naneva et al., 2020).
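
For illustration, the sketch below shows how a single IST-like item with a continuous slider between the two descriptions might be presented in PsychoPy. The window settings, item text, and response handling are hypothetical and simplified; this is not the actual experiment script.

```python
from psychopy import visual, core

win = visual.Window(size=(1280, 720), color="white", units="height")

# Hypothetical item: mechanistic (left) and mentalistic (right) descriptions
mech = visual.TextStim(win, text="iCub's grasp routine is triggered by a command.",
                       pos=(-0.4, -0.2), height=0.025, color="black", wrapWidth=0.35)
ment = visual.TextStim(win, text="iCub wants to grab the bottle.",
                       pos=(0.4, -0.2), height=0.025, color="black", wrapWidth=0.35)
# Continuous slider from 0 (totally mechanistic) to 100 (totally mentalistic)
slider = visual.Slider(win, ticks=(0, 100), granularity=0,
                       pos=(0, -0.35), size=(0.8, 0.03))

# Draw the item until the participant places the slider marker
while slider.getRating() is None:
    mech.draw()
    ment.draw()
    slider.draw()
    win.flip()

print("IST score for this item:", slider.getRating())
core.wait(0.3)
win.close()
core.quit()
```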

Part 2—Interaction Session

Participants were then instructed to sit beside the robot (at a distance of 1.30 m) in a separate room, and they were told that the task would consist of watching three documentary videos with the robot. Each video was edited to last 1.21 min, for a total duration of 4.3 min. In Experiments 1 and 2, before and after the videos, the robot interacted with the participants via a WoOz manipulation. In more detail, the robot would greet participants, introduce itself, ask participants’ names, and invite them to watch some videos together. At the end of the videos, the robot would say goodbye to the participants and invite them to proceed to fill out some questionnaires. In Experiment 3, participants were not exposed to any type of social interaction with the robot; the robot only issued verbal utterances about the calibration process it was undergoing (the script of the interactions is available at https://osf.io/xnm5c/).

Part 3—IST Postinteraction

After the interaction session with the robot, participants were asked to complete three questionnaires. First, they completed the second half of the IST to assess whether the robot session modulated their initial tendency to adopt the intentional stance. Subsequently, to assess their attitudes toward the robot after the interaction session, they completed the Robotic Social Attitudes Scale (RoSAS; Carpinella et al., 2017) and a set of seven questions from Waytz and colleagues (Ruijten et al., 2019; Waytz, Cacioppo, et al., 2010) to assess their tendency to attribute mind, morality, and reasoning to the robot. In addition, in Experiments 2 and 3, participants completed the Godspeed questionnaire (Bartneck et al., 2009) to assess their level of anthropomorphism of the robot. The RoSAS, the Waytz et al. questions, and the Godspeed questionnaire were completed with the explicit instruction to keep in mind the robot with which participants had just completed the task (see Supplemental Material).

All tests and questionnaires in all three experiments were administered in Italian, through PsychoPy (v2020.1.3; Peirce et al., 2019) or OpenSesame (v3.2.5; Mathôt et al., 2012). All analyses were conducted with JASP 0.14.0.1 (JASP Team, 2022). Three separate samples were collected, one for each experiment; therefore, there was no overlap of participants between the experiments. Moreover, to control for potential initial differences in participants’ likelihood of adopting the intentional stance, we compared the preinteraction IST scores (IST_pre) among the three experiments. Results showed no statistical differences in participants’ tendency to adopt the intentional stance before interacting with the robot (see Supplemental Material). All participants received monetary compensation of €30.

Experiment 1

Participants

Forty participants took part in the study. The study was approved by the local Ethical Committee (Comitato Etico Regione Liguria) and was conducted in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). Each participant provided written informed consent before taking part in the experiment. All participants were naïve to the purpose of this experiment. Data from one participant were excluded from the analyses due to technical problems that occurred during data collection. The final sample was N = 39 (Mage = 25, SDage = 4.75, R = 19–42, 28 females).

Analyses

To test whether belief in sharing the same phenomenological experience with a humanoid robot would enhance the adoption of the intentional stance, we first recoded participants’ choices in the IST so that they would range from 0 = totally mechanistic to 100 = totally mentalistic. Subsequently, we conducted a paired sample t test between the mean score at IST pre- and post-interaction with the iCub.
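
A minimal Python sketch of this analysis pipeline is given below, with randomly generated scores standing in for the real data (the published analyses were run in JASP). The paired t test and the paired-samples Cohen’s d follow the standard formulas.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 39
# Placeholder per-participant mean IST scores, recoded so that
# 0 = totally mechanistic and 100 = totally mentalistic.
ist_pre = rng.normal(42, 18, n)
ist_post = ist_pre + rng.normal(7, 13, n)

# Paired sample t test between pre- and post-interaction scores
t_stat, p_val = stats.ttest_rel(ist_pre, ist_post)

# Cohen's d for paired samples: mean difference / SD of the differences
diff = ist_pre - ist_post
d = diff.mean() / diff.std(ddof=1)
print(f"t({n - 1}) = {t_stat:.2f}, p = {p_val:.3f}, d = {d:.2f}")
```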

Results

Results showed a significant difference between the mean IST preinteraction score and the mean IST postinteraction score, t(38) = −3.44, p = .001, 95% CI [−11.51, −2.98], d = −0.55 (Table 1).

Table 1
Results From Paired Sample t Test Between IST Pre- and IST Post-Interaction (Experiment 1)

IST_pre: M = 42.12, SD = 18.33
IST_post: M = 49.37, SD = 20.12
t(38) = −3.44, p = .001
Mean difference = −7.25, SE = 2.10, 95% CI [−11.51, −2.98]
Cohen’s d = −0.55, 95% CI [−0.88, −0.21]

Note. IST = InStance test. Student’s t test.

After sharing a familiar context with the robot reacting in a human-like emotional manner contingent on the events in the videos, participants chose the mentalistic description more often, leading to an overall mean IST postinteraction score (MIST_Post = 49.37, SDIST_Post = 20.12) higher than the mean IST preinteraction score (MIST_Pre = 42.12, SDIST_Pre = 18.33).

Discussion Experiment 1

The main aim of Experiment 1 was to test whether the likelihood of adopting the intentional stance would be increased by creating a familiar social context that presumably elicits bonding and where the robot induces a “like-me” impression. Results showed that after the social interaction with the robot, participants indeed scored higher in the IST, meaning that they chose more often the mentalistic description of IST items in the postinteraction IST, relative to the preinteraction IST.

Experiment 2

To test the reliability of the effect observed in Experiment 1, we conducted a follow-up experiment where we kept the robot interaction session identical to Experiment 1, but we changed the way the IST items were split into pre- and post-interaction halves (see Experimental Procedure and Measures Common Across All Three Experiments section above).

Participants

Forty-one participants took part in the study and received monetary compensation of €30. The study was approved by the local Ethical Committee (Comitato Etico Regione Liguria) and was conducted in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). Each participant provided written informed consent before taking part in the experiment. All participants were naïve to the purpose of this experiment. Data from one participant were excluded from the analyses due to technical problems that occurred during data collection. The final sample was N = 40 (Mage = 29.12, SDage = 8.87, R = 18–60, 23 females).

Analyses

To test whether the experience of a shared social context with a humanoid robot, presumably eliciting bonding and a “like-me” impression, would enhance the adoption of the intentional stance, we first recoded participants’ choices in the IST so that they would range from 0 = totally mechanistic to 100 = totally mentalistic. Subsequently, we conducted a paired sample t test between the mean IST scores pre- and post-interaction with the iCub. Results showed a significant difference between the mean IST preinteraction score and the mean IST postinteraction score, t(39) = −5.31, p < .001, 95% CI [−17.07, −7.65], d = −0.84.

These results confirm the findings from Experiment 1. Indeed, participants chose the mentalistic description more often after the interaction with the robot, leading to an overall mean IST postinteraction score (MIST_Post = 54.08, SDIST_Post = 16.65) higher than the mean IST preinteraction score (MIST_Pre = 41.71, SDIST_Pre = 14.27; Table 2).

Table 2
Results From Paired Sample t Test Between IST Pre- and IST Post-Interaction (Experiment 2)

IST_pre: M = 41.71, SD = 14.27
IST_post: M = 54.08, SD = 16.65
t(39) = −5.31, p < .001
Mean difference = −12.36, SE = 2.33, 95% CI [−17.07, −7.65]
Cohen’s d = −0.84, 95% CI [−1.19, −0.47]

Note. IST = InStance test. Student’s t test.

Discussion Experiments 1 and 2

The main aim of Experiment 2 was to replicate and confirm the results of Experiment 1, which showed an increased likelihood of adopting the intentional stance after an interaction with the iCub robot in a familiar social context that presumably elicits social bonding and a “like-me” impression. To this aim, we split the IST items into pre- and post-interaction halves considering the psychometric structure of the IST (Spatola et al., 2021). Results confirmed the findings of Experiment 1: participants scored higher on the IST after the interaction with the robot than before it.

Overall, our results confirmed our hypothesis that creating a familiar social context of sharing an experience and social bonding, together with human-like behavior that might be interpreted as “like-me,” increases the tendency to adopt the intentional stance toward a humanoid robot.

Experiment 3

Aim of Experiment 3

Since the results of Experiments 1 and 2 showed that it is possible to increase the likelihood of the adoption of the intentional stance through a social context, shared experience, and human-like behaviors (emotionally contingent on the events in the videos), we needed to test whether the effect was indeed due to our experimental manipulation or rather to simple exposure to the robot. To this end, we conducted Experiment 3, in which the robot displayed repetitive and mechanistic behaviors in the same social context. We reasoned that behaviors that are not human-like and not emotionally contingent on the events occurring in the videos should disrupt the social bonding and the “like-me” impression. This, in turn, should not increase the likelihood of adopting the intentional stance after the interaction.

Participants

Forty-one participants took part in the study and received a monetary compensation of €30. The study was approved by the local Ethical Committee (Comitato Etico Regione Liguria) and was conducted in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). Each participant provided written informed consent before taking part in the experiment. All participants were naïve to the purpose of this experiment. Data from one participant were excluded from analyses due to a poor understanding of the Italian language. The final sample was N = 40 (Mage = 34.27, SDage = 12.29, R = 18–54, 30 females).

Experimental Procedure and Measures

The experimental procedure and the battery of questionnaires pre- and post-interaction were the same as in Experiment 1, except for the behaviors of the robot during the Robot Session (see Experimental Procedure and Measures Common Across All Three Experiments section above). The IST was split into pre- and post-interaction items in the same way as in Experiment 2.

Analyses

To test our hypothesis, we conducted a paired sample t test between the mean IST scores pre- and post-interaction with the iCub. Results showed no significant difference between the mean IST preinteraction score (MIST_Pre = 43.40, SDIST_Pre = 14.61) and the mean IST postinteraction score (MIST_Post = 44.97, SDIST_Post = 16.30), t(39) = −0.57, p = .569, 95% CI [−7.08, 3.95], d = −0.091 (see Table 3).

Table 3
Results From Paired Sample t Test Between IST Pre- and IST Post-Interaction (Experiment 3)

IST_pre: M = 43.40, SD = 14.61
IST_post: M = 44.97, SD = 16.30
t(39) = −0.57, p = .569
Mean difference = −1.56, SE = 2.72, 95% CI [−7.08, 3.95]
Cohen’s d = −0.091, 95% CI [−0.40, 0.22]

Note. IST = InStance test. Student’s t test.

Discussion Experiment 3

The main aim of Experiment 3 was to test whether the effects observed in Experiments 1 and 2 were indeed due to our manipulation rather than to mere exposure to the robot. To address this aim, we exposed participants to an interaction with a robot displaying repetitive and preprogrammed behaviors that were not emotionally contingent on the events of the videos. Results of Experiment 3 showed that participants did not increase, at the group level, their initial tendency to adopt the intentional stance after the interaction with the mechanistically behaving robot. This suggests that the effects of Experiments 1 and 2 were indeed due to our intended manipulation rather than to mere exposure to the robot or a social context. Thus, we can conclude that mere exposure to a robot is not sufficient to increase the likelihood of adopting the intentional stance toward an artificial agent. On the other hand, creating a familiar context of shared experience with a robot that gives the impression of being “like-me” (as in Experiments 1 and 2) might lead participants to increase their likelihood of adopting the intentional stance.

Comparison Between Experiments

To confirm the impact of human-like robot behaviors on the likelihood of adopting the intentional stance, and to control for age and gender, we decided to compare the results between experiments. Specifically, we conducted an analysis comparing Experiment 2 and Experiment 3 where the IST pre- and post-interaction were administered in the same way (same way of splitting IST items into pre- and post-interaction sets) while the interaction itself differed with respect to human-likeness of robot behaviors. We first calculated the Δ-IST score as the difference between the IST post and IST pre for each participant. Subsequently, we performed an analysis of covariance (ANCOVA) considering the Δ-IST score as our dependent variable, experiment as a fixed factor, and age and gender as covariates. The Δ-IST score allows us to compare the magnitude of the modulation of the adoption of the intentional stance related to robot exposure. No main effect of gender or age emerged as significant. Furthermore, confirming previous results, the main effect of experiment emerged as significant, F(1, 76) = 6.64, p = .012, η² = 0.07. Post hoc comparisons with Bonferroni correction revealed a significant difference in the Δ-IST score between Experiment 2 and Experiment 3, t = 2.57, p = .012, 95% CI [2.18; 17.05], d = 0.6.
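
The between-experiment comparison can be sketched as follows in Python, again with placeholder data (the published analysis was run in JASP). The CSV file and column names are hypothetical; the model mirrors the reported ANCOVA with the Δ-IST score as dependent variable, experiment as fixed factor, and age and gender as covariates.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical file with one row per participant from Experiments 2 and 3;
# columns: ist_pre, ist_post, experiment, age, gender.
df = pd.read_csv("ist_experiments_2_3.csv")

# Delta-IST: post- minus pre-interaction score for each participant
df["delta_ist"] = df["ist_post"] - df["ist_pre"]

# ANCOVA: experiment as fixed factor, age and gender as covariates
model = smf.ols("delta_ist ~ C(experiment) + age + C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```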

General Discussion

The present study aimed at examining whether people might increase their likelihood of adopting the intentional stance toward a humanoid robot in a familiar context of a shared experience of watching a movie together, in which the robot displays human-like behaviors, emotionally contingent on the events in the video, and thus presumably creating a “like-me” impression and social bonding.

To address this aim, we invited participants to watch three videos alongside the iCub robot. During the video-watching session, the robot would either exhibit emotional, human-like reactions contingent on the narration of the videos (Experiments 1 and 2) or a very repetitive, machine-like behavior with no emotional reactions to the events of the videos (Experiment 3). Moreover, before the video session, the robot would either greet and verbally interact with participants in a human-like manner through a WoOz technique (Experiments 1 and 2) or display a mechanistic, preprogrammed calibration behavior (Experiment 3). Our results showed that the behaviors displayed by the robot in Experiments 1 and 2 led participants to score higher (i.e., choose more mentalistic descriptions of robot behavior) in a test probing the degree of adoption of the intentional stance (the IST; Marchesi et al., 2019) after the interaction, relative to their scores before the interaction. Therefore, we can speculate that the short experience of sharing a social context with the robot presumably led participants to perceive the humanoid robot as “like-them,” increasing the likelihood of adopting the intentional stance toward the robot. Conversely, short, repetitive, and machine-like behaviors (Experiment 3) did not affect the initial degree of adopting the intentional stance toward the robot, confirming that the differential effect observed in Experiments 1 and 2 was due to the experimental manipulation of the behaviors and not to mere exposure to the robot. Our results can possibly be interpreted in light of the “like-me” account (Meltzoff, 2007), in which humans interpret other humans’ behavior with reference to their knowledge of themselves. Sharing a social context with agents whose behaviors are perceived as similar to “how I would behave” might activate what Fuchs and De Jaegher defined as “mutual affective resonance” (Fuchs & de Jaegher, 2009). This mutual attunement between agents in a shared social experience leads them to attune their affective and kinematic behaviors, ultimately resulting in a “mutual incorporation” of the other into our perception of the experience (Fuchs, 2017). Moreover, Higgins (2020) recently integrated the literature on shared experience perception by defining the “minimal relational self” as an ontologically primitive notion of selfhood, along with the minimal experiential selfhood introduced by Zahavi (2017). Higgins argues that social interaction constitutes one of the first experiences we have as human beings and that it develops along with the internal perception we have of our consciousness. In other words, we use social interaction to define ourselves through others, as much as we use our own perception of how we experienced something. In the context of our results, we could speculate that the shared social and affective context that participants experienced in Experiments 1 and 2 led them to build the expectation that the robot could be an interactive embodied agent able to perceive the context similarly to how they were experiencing it themselves. It is also plausible that this chain of sociocognitive processes was enhanced by the anthropomorphic appearance of the iCub and that, together, these factors increased the tendency to adopt the intentional stance toward the robot.
However, these are speculative interpretations of the results, and the exact mechanisms underlying the phenomena observed in our study will need to be examined in future research.

Our results are also in line with recent literature on the adoption of the intentional stance and mind attribution toward robot behaviors (Abubshait et al., 2021; Ciardo et al., 2021; Marchesi et al., 2020). Specifically, Ciardo et al. (2021) report that when a robot’s behavior is perceived as more mechanistic in a joint task, participants decrease their likelihood of adopting the intentional stance toward it. Along similar lines, Marchesi et al. (2020) tested whether observing a robot that exhibits variable behavior modulates the adoption of the intentional stance. The authors found that infrequent, unexpected behaviors increased the likelihood of adopting the intentional stance. Finally, Abubshait et al. (2021) found a pattern of results in a very similar direction to those presented here. In their experiment, participants performed a joint task with the iCub robot. In one condition, they believed they scored jointly with iCub, while in another condition, they scored individually. The results showed that the “social framing” of the task, namely the belief that participants scored as a team with iCub, increased the likelihood of adopting the intentional stance toward iCub. Hence, similarly to the present study, social “bonding” with the robot seemed to increase the likelihood of adopting the intentional stance.

Taken together, we argue that people might be more likely to adopt the intentional stance toward artificial agents when the agents create the impression of being “like-me” and when the context generates social bonding and shared experience. This is in line with such phenomena as shared intentionality (Dewey et al., 2014; Gilbert, 2009; Pacherie, 2014) and other effects occurring during shared social contexts (Boothby et al., 2014; De Jaegher & Di Paolo, 2007; Higgins, 2020). Our assumption is based on the idea that the humanoid appearance and the human-like reactions of the iCub robot induced in people the human model, considered in the literature as a default mechanism (Urquiza-Haas & Kotrschal, 2015; Wiese et al., 2017), and ultimately led them to adopt the intentional stance toward it. Indeed, recent literature in human–robot interaction argues that when facing a nonfamiliar and complex agent (such as a humanoid robot), anthropomorphism is a default: an intuitive and well-known model that helps reduce the uncertainty and the cognitive effort devoted to explaining the behavior of such an agent (Cacioppo & Petty, 1982; Spatola & Wykowska, 2021). Spatola and Wykowska (2021) also report that individuals with a higher need for cognition can apply different models, leading to the adoption of different strategies to interpret the behavior of the robot (see also Prescott, 2017; Ramsey et al., 2021). Hence, although not directly addressed in the present study, individual dispositions might play a strong role in determining which model is more adequate to explain the behavior of a humanoid robot. Future studies should disentangle the interplay between these variables to enrich our knowledge of how social cognition mechanisms are applied when humans face artificial agents.

Limitations and Future Directions

Our study shows that an increased tendency to adopt the intentional stance is influenced by the phenomenology of shared experience with the robot, presumably induced by the behaviors displayed by the robot and the context of interaction. However, the present study does not directly disentangle the complex interplay between participants’ perception of the shared experience per se, anthropomorphic attributions, and expectations about robots that might have played a role in the adoption of the intentional stance. The measures used in the present study do not allow for drawing concrete conclusions regarding which specific mechanisms of social cognition are involved in the perception of a shared social context when our partner is a humanoid robot. Moreover, in the present study, we reported results from self-reports, which are inherently limited when investigating implicit mechanisms of (social) cognition. Future studies should replicate the present results with larger numbers of participants from different backgrounds (i.e., cultural and educational) to evaluate the generalizability of the results. Furthermore, future research employing implicit measures (e.g., eye tracking, electroencephalography [EEG]) could help unravel the complexity of the cognitive mechanisms underlying our findings. Additionally, it is plausible that various social contexts would influence the adoption of the intentional stance; therefore, everyday social contexts should be examined in future research on the topic. For example, when a robot performs a more social task, such as tutoring a child in an educational activity, human-like models might be used during the interaction (Ramsey et al., 2021). In contrast, when robots are used as tools in a physical interaction task, users might be more likely to deploy object-based models. Therefore, factors such as the function and purpose of a robot need to be taken into account when testing the likelihood of adopting the intentional stance toward robots (Kahn & Shen, 2017; Malinowska, 2021; Papagni & Koeszegi, 2021; Prescott, 2017).
