DOI: 10.1145/3613904.3642905 · CHI Conference Proceedings
Research Article · Open Access · Artifacts Available / v1.1

Join Me Here if You Will: Investigating Embodiment and Politeness Behaviors When Joining Small Groups of Humans, Robots, and Virtual Characters

Published: 11 May 2024

Abstract

Politeness and embodiment are pivotal elements in human-agent interactions. While many previous works advocate the positive role of embodiment in enhancing these interactions, it remains unclear how embodiment and politeness affect individuals joining groups. In this paper, we explore how politeness behaviors (verbal and nonverbal) exhibited by three distinct embodiments (humans, robots, and virtual characters) influence individuals’ decisions to join a group of two agents in a controlled experiment (N=54). We assessed agent effectiveness regarding persuasiveness, perceived politeness, and participants’ trajectories when joining the group. We found that embodiment does not significantly impact agent persuasiveness and perceived politeness, but politeness does. Direct and explicit politeness strategies have a higher success rate in persuading participants to join the group at the furthest side. Lastly, participants adhered to social norms when joining at the furthest side, maintained a greater physical distance from humans, chose longer paths, and walked faster when interacting with humans.


1 INTRODUCTION

Small group interactions [36] with humans and artificial agents in physical and virtual environments profoundly shape our social experiences [42, 67], from conversations to collaborations. Understanding and replicating these experiences in Human-Computer Interaction opens great possibilities and poses challenges, such as choosing the embodiment through which agents express behaviors and setting the tone of those behaviors through politeness; this makes embodiment and politeness fundamental to social interactions. Embodiment [40] refers to the integration of physical or digital form within an intelligent system, which creates a connection between an agent and its interface [10, 15, 56]. Therefore, the choice of embodiment may affect how individuals perceive and respond to agents [8]. Moreover, a robot or virtual character may elicit different reactions, including social comfort, perceived intelligence, emotional connection, trust, and engagement [20, 38, 62, 68, 77]. At the same time, politeness plays a crucial role in shaping the tone and effectiveness of small-group interactions. Politeness [13] guides individuals in expressing their needs and desires while minimizing face-threatening acts, and it determines the success of persuading others while maintaining social harmony. Therefore, understanding how embodiment and politeness intersect in human-agent interactions is vital for designing small-group interactions between humans and artificial agents to enhance social interactions, which we explore in this paper.


Figure 1: Overview of setups with three different types of embodiment, in which participants had to join a group of: (A) humans, (B) robots, and (C) virtual characters. Only in scenario C were participants fully immersed in VR; in A and B, they wore the VR headset for tracking purposes only and it was not positioned directly in front of their eyes. In (A), the participant is standing at the starting position; in (B), the participant has joined at the furthest side of the group; in (C), at the closest side.

Previous work has explored the impact of embodiment and politeness [7, 40, 82] in various contexts [2, 17, 61, 70, 73] and demonstrated the positive impact of these factors on participants’ perception of and response to robots and virtual characters [6, 25, 37, 81]. While many studies advocate the positive role of embodiment, particularly physical embodiment [7, 40, 82], in enhancing human-agent interactions, this topic remains a subject of ongoing exploration, marked by varying and sometimes contradictory findings. For instance, Hasegawa et al. claimed that while embodiment positively affected participants’ perceptions, it showed no significant influence on performance [28]. Although the effects of embodiment and politeness on social and collaborative interactions in physical and virtual environments have been previously explored [49, 69, 71, 75, 76, 78, 79], few studies [12, 14, 18, 51, 53, 66, 74] have focused on inviting newcomers to join small groups, which can be challenging due to exclusive behaviors and social anxiety. Therefore, we explore how politeness behaviors exhibited by (non-)human agents to invite individuals to join a small group can influence joining behaviors. This could be beneficial in public events where humanoid robots or virtual characters help newcomers overcome barriers related to social anxiety in approaching and joining a small conversational group, promoting an inclusive and approachable atmosphere for all participants.

In this paper, we examine how agents’ different embodiments and politeness behaviors influence newcomers’ decisions to join small groups. For this, we focus on the combined effects of embodiment (human, humanoid robot, virtual character) and politeness behaviors (baseline – not doing the act, indirect – using indirect language, positive – emphasizing friendliness and camaraderie) on human group joining behavior within small groups. To investigate human behaviors in such settings, we conducted a controlled laboratory experiment (N = 54) to assess the persuasive impact of agents’ requests to join a group at a specific side and how this influences participants’ actions and perceptions of these requests. In the experiment, participants faced a social dilemma, requiring them to choose among three options: (1) investing more effort to join the group through a socially acceptable route, aligning with the agent’s request, (2) opting for the least effortful route that complies with the agent’s request, an unsocial route involving walking directly through the center of the group, and (3) choosing a convenient route to join the group at its closest side, thereby striking a balance between effort and social acceptance. Our results showed that embodiment did not significantly affect the persuasiveness of the agent or how polite it was perceived to be, whereas politeness behaviors did. Specifically, employing direct and explicit politeness strategies showed greater effectiveness in persuading participants to join the group at its furthest side. We found that participants took less time to finish walking with humans than with the other two agent types, and with indirect behavior than with the other two behaviors. Moreover, the path length participants walked was longer with proposing behavior than with the other two behaviors, but did not differ across agent types. Participants’ final distance to the main agent was shorter with robots than with the other two agent types, and with proposing behavior than with the other two behaviors. Lastly, participants’ final distance to the secondary agent was shorter with robots and virtual characters than with humans, and shorter with proposing and indirect behaviors than with the baseline behavior. Our main research contribution is an empirical evaluation of the effects of embodiment and politeness behaviors on humans’ group joining behaviors and walking patterns.


2 RELATED WORK

In this section, we outline and connect our approach to the existing body of work about group and proxemics theory, embodiment, and politeness theory.

2.1 Group and proxemics theory

When individuals come together to engage in communication as a group, they create what is known as a “free-standing conversational group” [67]. The management of space within and between individuals in a group has been a subject of investigation in several studies [27, 36, 41, 46, 64]. These spatial dynamics within a group have been explored extensively, with Kendon’s theory of “F-formation” [36] providing insights into how individuals organize themselves in a group context. F-formation delineates the arrangement of a group where all members have equal, direct, and exclusive access to the group’s social space. Within F-formation, there are three distinct social spaces: “o-space”, “p-space”, and “r-space”. “O-space” represents a convex, empty space enclosed by individuals engaged in social interaction, and it is exclusive to the members of the group. This study focuses on how humans apply these spatial concepts when interacting with a group of agents, i.e., humans, robots, and virtual characters, with particular attention to whether they avoid crossing the boundaries of the group’s “o-space” when joining it. Moreover, Hall [27] categorized the area around individuals into four specific zones: (1) intimate space (0-45 cm), (2) personal space (45-120 cm), (3) social space (120-365 cm), and (4) public space (> 365 cm). According to this theory, social interactions among acquaintances primarily take place within the social space, while the personal space is reserved for interactions among close friends or family members. Consequently, in this work, we deliberately selected the onset of the social space zone [27, 35, 88] as a point that presents individuals with a dilemma: whether to navigate through the group’s o-space or circumvent it. Additionally, the investigation aims to determine whether participants comply with the agent’s invitation and join the specific side that the agent encourages them to join. These theories have been applied to computational modeling and artificial systems in situations where a newcomer approaches and joins a small group, such as computational analysis methods [18, 66, 74], social simulations [14, 53], proxemics [12, 51, 55] and datasets [84] concerning small groups. These endeavors have laid the foundation for developing artificial models utilized by virtual characters and mobile robots. These agents may function as group members who need to accommodate newcomers or can join a group while being socially aware. This awareness may involve considering social body cues or striving to minimize disruptions within the group [3, 24, 52, 73, 85, 86, 87].
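As a concrete reading of these zone boundaries, the following minimal Python sketch classifies an interpersonal distance into Hall's four zones; the function name, units, and boundary handling are ours for illustration, not part of the cited theory or the paper's materials.

```python
# Hall's proxemic zones [27], with boundaries in centimetres as quoted above.
# Boundary values are assigned to the inner zone, an assumption of this sketch.
def hall_zone(distance_cm: float) -> str:
    if distance_cm < 0:
        raise ValueError("distance must be non-negative")
    if distance_cm <= 45:
        return "intimate"   # 0-45 cm
    if distance_cm <= 120:
        return "personal"   # 45-120 cm
    if distance_cm <= 365:
        return "social"     # 120-365 cm
    return "public"         # > 365 cm

# The agents in this study stand 125 cm apart, i.e., at the onset of the
# social zone, which is what creates the joining dilemma described later.
print(hall_zone(125))  # -> "social"
```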

2.2 Embodiment

Embodiment [40], within the realm of artificial intelligence and robotics, encompasses various forms that dictate an agent’s interaction and integration within its environment. Notably, thoughts, feelings, and behaviors are intricately linked to bodily interactions with the environment, and this impacts individuals’ social behaviors and psychological experiences [43]. Embodiment encompasses an agent’s morphology, interaction dynamics, and engagement capabilities with its environment [19, 23]. One categorization of embodiment spans from physical embodiments, involving robots with tangible bodies constructed from materials like metal or plastic, to virtual embodiments represented by animated characters in computer graphics. Within these classifications, anthropomorphism [22], a tendency to attribute human-like traits to non-human entities, plays a pivotal role in shaping humans’ interactions with technology. The Computers Are Social Actors (CASA) paradigm [39, 48, 59] further extends this notion, emphasizing how agents, whether physical or virtual, portray human-like behavior to engage meaningfully with users. Anthropomorphism, as a facet of embodiment, influences how humans perceive and interact with agents, shaping their expectations, comfort levels, and responses in social settings [15, 21]. Moreover, this paradigm underscores the importance of agents’ social attributes and behaviors in driving successful human-agent interactions, aiming to bridge the gap between technological entities and human users by fostering relatable and effective communication. In this study, we selected three types of embodiment (humans, humanoid robots, and virtual characters) to embody three distinct politeness behaviors. This choice aimed to ascertain potential differences between human and non-human agents (robots and virtual characters) concerning their effectiveness in inviting individuals to join a small group. Such investigations are valuable in scenarios where the substitution of human agents with non-human agents, such as robots or virtual characters, is necessary, particularly in contexts like hospitality services. Understanding the efficacy of these diverse agents in social interactions can aid in finding appropriate solutions for various scenarios requiring non-human involvement.

2.3 Politeness theory

The CASA (Computers Are Social Actors) paradigm [39, 48, 59] highlights that individuals attribute human-like qualities to computers and other technological devices. Consequently, they incorporate elements relevant to human interactions, such as politeness [47], into their interactions with these devices. Hence, the development of polite behaviors for artificial agents, i.e., robots or virtual characters, is crucial to establishing and maintaining a positive user perception of agents, fostering rapport [72] and facilitating long-term collaborations in group scenarios [17]. Several studies in the domain of Human-Computer Interaction (HCI) have explored the impact of politeness across various contexts such as dialogue management and conversations [44, 45, 70], machine translation [65], assessing politeness levels in peer reviews [9], communication styles and interaction contexts [63], mental health and legal applications [57], and social companionship for older adults [30]. Notably, politeness assumes a critical role in shaping social interactions and behaviors within groups. Brown and Levinson [13] introduced politeness theory, which involves efforts to prevent or mitigate actions that could harm an individual’s public self-image or face [26]. They identified five distinct strategies for expressing needs while minimizing face-threatening acts:

(1)

Not doing the act (NOT): Avoiding the action altogether. Example: a room is hot and multiple individuals are present in it; one person feels uncomfortable due to the heat and wishes to open the window for fresh air but refrains from doing so, perhaps out of reluctance to inconvenience others or a desire to maintain politeness, even at the cost of their own comfort.

(2)

Indirect (IND): Using indirect language or an off-record approach. The off-record strategy uses indirect language, avoiding imposition on the listener. It involves expressing something general or different from the speaker’s true intent, relying on the listener’s interpretation of the request and their willingness to fulfill it. Example: “It is hot in here!” The listener might interpret this as a subtle suggestion to open the window for fresh air.

(3)

Negative politeness (NEG): Focusing on avoiding imposition or intrusiveness and respecting other people’s need for autonomy. Example: “It feels warm in here. Would you mind opening the window, if that’s okay?” The speaker expresses concern about the warmth without directly imposing on the listener and politely asks whether they would consider opening the window, respecting their autonomy and freedom.

(4)

Positive politeness (POS): Emphasizing friendliness and camaraderie, seeking to establish a warm and friendly rapport in communication. Example: “It seems that you feel hot. Would you like to open the window?” By emphasizing the needs of the listener, the speaker warmly invites them to open the window.

(5)

Direct (DIR): Using clear and direct language. Example: “Open the window!” This statement is straightforward and concise, making a direct request to open the window.

Given that previous work has shown that the positive politeness strategy is more effective than negative and direct politeness strategies at persuading individuals to join small groups while maintaining a positive impression [35, 88, 90], within the scope of this paper we focus on three of the five politeness behaviors: (1) NOT (not doing the act), (2) IND (using indirect language), and (3) POS (emphasizing friendliness and camaraderie).

The interplay between embodiment and politeness strategies, and its impact on individuals joining groups of humans, robots, and virtual characters, remains under-explored. Humanoid robots or virtual characters that aid newcomers in overcoming social anxiety barriers by facilitating their approach to and joining of small conversational groups in public gatherings could help foster an inclusive and welcoming atmosphere for all participants, which could lead to pleasant long-term interactions between group members. This study seeks to fill this gap by examining how agents’ embodiments, i.e., humans, robots, or virtual characters, and politeness strategies influence a newcomer’s decision to join a group at a specific side, while considering their subsequent perception of the agent’s request. Additionally, the study investigates how these factors impact the trajectories of human participants as they join the group and explores the potential influence of social presence associated with different embodiments on various study variables.


3 METHODOLOGY

To systematically investigate how agents’ different embodiments and politeness strategies influence newcomers’ decisions to join a group, we conducted a controlled laboratory experiment to assess the persuasive impact of agents’ requests to join a group at a specific side and how this influences participants’ actions and perceptions of these requests. With this experiment, we addressed the following three research questions, whose formulation stemmed from the nuanced interplay between embodiment, politeness, and human social behavior observed in the existing literature.

RQ1

“To what extent does an agent’s embodiment and politeness behavior influence the social behavior of humans when invited to join a small group?”

The impetus for this question arises from research emphasizing the significance of embodied agents in human-agent interactions [40]. Additionally, the lack of comprehensive studies exploring the specific impact of politeness strategies [13], particularly in the context of group joining behaviors, underscores the need for deeper investigation into their interplay in inviting individuals to join a group.

RQ2

“How does an agent’s embodiment and politeness behavior affect the perceived politeness of the invitation to join a small group?”

This question is rooted in the essential role of politeness strategies [13] in communication and their implications on social interactions. Existing literature emphasizes the importance of politeness in minimizing face-threatening acts [26] and maintaining social harmony, yet specific investigations into its impact on group joining behavior are scarce.

RQ3

“What is the relationship between an agent’s embodiment, its perceived social presence, and the behavior of individuals when joining a small conversational group?”

The exploration of social presence [11] concerning embodiment draws from previous studies examining the impact of agents’ physical or digital forms on interaction dynamics [40]. However, limited research has focused on the intricate relationship between embodiment, social presence, and individuals’ behaviors when joining small conversational groups.

In this study, the adaptation of verbal behaviors of the agents is grounded in Brown and Levinson’s politeness theory [13], closely aligning with its principles. However, unlike the original theory, which primarily focused on linguistic aspects of politeness, this research also incorporates non-verbal aspects of behavior. While drawing inspiration from Brown and Levinson’s work, this study extends its scope by exploring the perception of these behaviors exhibited by three types of embodiment, a dimension not directly addressed in the original theory. Furthermore, the discussion of a social dilemma in later sections (see Section 3.2) adds another layer of contribution, showcasing how aspects of this theory can be practically applied in the domain of HCI, particularly in small group settings. Specifically, it examines how individuals respond to a number of politeness strategies during group tasks and whether they adhere to the agents’ requests or social norms, such as refraining from walking through the center of a group, which are not part of the original theory.

3.1 Study Design

To answer these questions, a within-subject study was designed with two independent variables: (1) embodiment and (2) politeness behavior.


Figure 2: Main agent’s (A1) behaviors: In each scenario, A1 invites participants to join the group, employing a combination of verbal and non-verbal behaviors aligned with politeness strategies drawn from the theory: (a) Baseline (BSL); (b) Indirect (IND); (c) Proposing (PRO). The experiment incorporates three different embodiments (humans, robots, and virtual characters) arranged from left to right.

Politeness Behavior | Strategy | Verbal behavior                  | Nonverbal behavior
1. Baseline (BSL)   | NOT      | None                             | None*
2. Indirect (IND)   | IND      | “Welcome back!”                  | Open palm up
3. Proposing (PRO)  | POS      | “This place is waiting for you!” | Open palm sideways and partly downward

Table 1: Experiment politeness behaviors and corresponding politeness strategies, verbal and nonverbal behaviors derived from the theory. *Note: In all conditions, agent 1 (A1) consistently maintained eye contact with the participant. The specified verbal and nonverbal behaviors were consistently executed in an identical manner by A1 (see Figure 2).
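Encoded as data, Table 1 might look like the minimal Python sketch below. This is a hypothetical structure for driving agent behavior from a Wizard-of-Oz controller; the names and layout are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical encoding of Table 1; keys and gesture identifiers are ours.
POLITENESS_BEHAVIORS = {
    "BSL": {"strategy": "NOT", "utterance": None, "gesture": None},
    "IND": {"strategy": "IND", "utterance": "Welcome back!",
            "gesture": "open_palm_up"},
    "PRO": {"strategy": "POS", "utterance": "This place is waiting for you!",
            "gesture": "open_palm_sideways_partly_downward"},
}
# In all conditions, the main agent (A1) additionally maintains eye contact.
```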

3.1.1 Embodiment.

We utilized three distinct embodiments: humans, robots, and virtual characters (Figure 2). This deliberate selection was made to examine potential differences between human agents and two distinct categories of non-human agents – humanoid robots and virtual characters – in their effectiveness at inviting individuals to join a small group. Additionally, within the category of non-human agents, we considered both physically embodied agents (humanoid robots) and virtually embodied ones (virtual characters). Existing literature [7, 82] suggests that physical robots tend to have a greater influence than virtual characters, prompting our consideration of both types for a comprehensive evaluation. The experiment involved positioning two agents in a face-to-face group formation, with a distance of 125 cm separating them, within a room measuring 5.5 by 5 meters, as depicted in Figure 3. This distance marks the initial boundary within social space [27] and is deliberately selected based on prior research [35, 88] as a point that presents individuals with a dilemma: whether to navigate through the group’s o-space or circumvent it. The main agent, denoted as A1, was positioned facing the participants and extended invitations to join the group by employing a combination of verbal and non-verbal politeness behaviors. Throughout the entire duration of each trial, A1 consistently maintained eye contact with the participants. Moreover, as participants commenced their approach towards the group, the secondary agent (A2) also initiated and sustained eye contact with them. The design of this approach was informed by prior studies [35, 88], aiming to establish a welcoming and inclusive environment for participants to join the group. It was crafted to illustrate the agents’ openness, anticipation, and readiness for participants to become part of their group.

3.1.2 Politeness behaviors.

We created three distinct politeness behaviors, informed by three politeness strategies [13]. We have validated these strategies and their associated behaviors in four previous user studies with virtual characters and humanoid robots in physical and virtual environments [35, 88, 89, 90]. Our findings showed that participants consistently perceived a good alignment between the designed verbal and nonverbal behaviors and the specific politeness strategies derived from Brown and Levinson’s politeness theory [13]. These behaviors encompass varying levels of politeness, verbal and non-verbal, as detailed in Table 1. With these politeness behaviors, we aimed to invite participants to join a group of two agents: (1) NOT (not doing the act), (2) IND (employing indirect language to make a request rather than directly asking for it), and (3) PRO (emphasizing positive politeness by attending to the hearer’s interests, needs, and wants).

The objective of the “not doing the act” (NOT) behavior was to observe the natural behavior of individuals when there was no explicit invitation by the agent to join a specific side of the group. Hence, we selected the Baseline behavior to examine individuals’ innate inclination to join the group without any indication regarding which side to join. This followed the first politeness strategy of refraining from any action (Figure 2 BSL). The indirect behavior (IND) was chosen as an implicit way of inviting participants to join a particular side of the group. Consequently, in the Indirect behavior, the agent subtly indicated the preferred side by welcoming participants with an arm gesture (see Figure 2 IND). This gesture did not explicitly specify a side; the agent merely used the arm on the side it wanted participants to join. Therefore, both the verbal and non-verbal components of this behavior remained indirect. This indirect approach allows participants to infer the agent’s intention and decide whether to comply without feeling explicitly pressured. We considered three options (negative, positive, and direct strategies) for an explicit way of inviting individuals to join the group. However, previous studies with virtual characters [35, 90] and social robots [88] suggested that the positive politeness strategy is the most effective approach to invite participants to join a group at a particular side while maintaining a positive impression on participants. Consequently, the positive politeness strategy (proposing behavior) was selected as the explicit way of inviting participants to join the group, offering higher directness and clarity, both verbally and non-verbally, and indicating the specific side where the agent wanted participants to join (Figure 2 PRO). It aligns with attending to the hearer’s interests, needs, and wants, suggesting that there is a designated place reserved for participants and indicating consideration for their presence within the group.

To systematically investigate these independent variables, all combinations of politeness behaviors and embodiments were integrated, creating nine distinct experimental conditions. The experiment was organized into three blocks, each corresponding to one embodiment, i.e., humans, robots, or virtual characters. The sequence in which these embodiments were presented, as well as the order of behaviors within each block, was randomized using a balanced Latin square. Furthermore, to minimize the potential influence of gender on participant behavior, exclusively female agents were employed when interacting with female participants, and solely male agents when interacting with male participants. Lastly, with the within-subjects design of this study, we aimed to enable participants to experience and compare all conditions, minimizing individual differences and enhancing statistical power [16].
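The paper does not include its randomization code; the sketch below shows one standard construction of a balanced Latin square for n conditions. For odd n (as with the three embodiments here), the mirrored rows are appended so that every condition precedes every other equally often. This is an illustrative assumption about how such counterbalancing can be generated, not the authors' implementation.

```python
def balanced_latin_square(n: int) -> list[list[int]]:
    """One standard balanced Latin square construction.

    Row 0 follows the pattern 0, 1, n-1, 2, n-2, ...; each subsequent row
    shifts it by one (mod n). For odd n, mirrored rows are appended so that
    every condition precedes every other condition equally often.
    """
    first, lo, hi, take_low = [0], 1, n - 1, True
    while len(first) < n:
        first.append(lo if take_low else hi)
        lo, hi = (lo + 1, hi) if take_low else (lo, hi - 1)
        take_low = not take_low
    rows = [[(x + i) % n for x in first] for i in range(n)]
    if n % 2 == 1:
        rows += [list(reversed(row)) for row in rows]
    return rows

# Three embodiments -> six counterbalanced presentation orders.
EMBODIMENTS = ["human", "robot", "virtual character"]
for order in balanced_latin_square(3):
    print([EMBODIMENTS[i] for i in order])
```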

3.2 Social dilemma

Given the types of embodiments and politeness strategies, we intentionally designed the study to present participants with a dilemma where they had to make a choice among three options:

(1)

Opting for a socially acceptable but more effortful approach to join the group, complying with the agent’s request. This involved taking an inconvenient route around the group, which required approximately 12 steps.

(2)

Choosing the least effortful way to comply with the agent’s request: an unsocial route walking directly through the center of the group, which meant violating the group’s o-space. This option required about 8 steps.

(3)

Selecting a convenient route, involving only about 4 steps to join the group at the closest side. This choice balanced effort and social acceptance but conflicted with the agent’s invitation to join at the furthest side (as outlined in Table 2).

This dilemma enabled the examination of participants’ joining behavior within the group, considering both their decision to join (in response to persuasion and complying with the agent’s request) and their approach to doing so (adhering to social norms while walking), which involved varying levels of effort. The social acceptability of these three potential routes was founded on established social norms and derived from Kendon’s theory [36]. Kendon’s theory suggests that people generally refrain from walking through a group’s o-space when adequate space exists around it. The specific dilemmas presented in this study aim to investigate how individuals manage conflicting social norms while taking into account polite invitations from different agents and the associated effort implications. Table 2 succinctly summarizes these dilemmas, encapsulating the varying trade-offs related to social norms, levels of persuasion in politeness behaviors, and effort considerations. Furthermore, the dilemma indicates the extent to which individuals perceive artificial agents (i.e., humanoid robots and virtual characters) as social entities (akin to humans) and conform to their requests. Also, it explores whether the politeness exhibited by these agents could impact participants’ decisions regarding their behavior in joining the group. Notably, participants were not incentivized to adhere to the requests made by the agents or to abstain from walking through the group’s o-space. This exploration addresses research question 1 (RQ1) of the study. Importantly, participants had complete freedom to decide where and how they wished to join the group, and none of these three potential routes were presented to them.
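For the social-adherence measure described later (Section 3.4.1), a route counts as unsocial if it passes through the group's o-space. Below is a minimal geometric sketch of such a check, approximating the convex o-space as a disc centred between the two agents; the approximation, names, and coordinates are ours, not the paper's actual classifier, and the check assumes a densely sampled trajectory.

```python
import math

def crosses_o_space(trajectory, agent1, agent2):
    """True if any (x, z) trajectory sample lies inside the o-space,
    approximated as a disc centred midway between the two agents with a
    radius of half their separation (62.5 cm for the 125 cm setup here)."""
    center = ((agent1[0] + agent2[0]) / 2, (agent1[1] + agent2[1]) / 2)
    radius = math.dist(agent1, agent2) / 2
    return any(math.dist(center, p) < radius for p in trajectory)

# A path straight through the group's centre violates the o-space...
assert crosses_o_space([(0.0, -2.0), (0.0, 0.0), (0.0, 2.0)],
                       agent1=(-0.625, 0.0), agent2=(0.625, 0.0))
# ...while a path skirting around the group does not.
assert not crosses_o_space([(0.0, -2.0), (1.5, 0.0), (0.0, 2.0)],
                           agent1=(-0.625, 0.0), agent2=(0.625, 0.0))
```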


Figure 3: Experiment room: S represents the initial spot for each trial where participants started their movements. The closest (C) and furthest (F) sides of the group for joining from S are distinguished by green and blue circles, respectively. Additionally, three distinct hypothetical routes for joining the group are depicted, each associated with a specific color: green for inconvenient, red for unsocial, and black for convenient. In all conditions (except for BSL), A1 (the main agent) invited participants to join at the furthest side (F).

Route        | Persuasion | Social adherence | Effort
Convenient   | No         | Yes              | Low
Unsocial     | Yes        | No               | Medium
Inconvenient | Yes        | Yes              | High

Table 2: Various alternatives and their associated trade-offs from the participants’ perspective, as illustrated in Figure 3.

3.3 Apparatus

3.3.1 Trajectory and experiment control.

We utilized HTC VIVE Pro headsets and controllers to record trajectory data and control the experiment. To facilitate participants’ movement within the room, HTC VIVE wireless adapters were attached to the headsets and powered by wearable battery packs (Figure 1). An application was developed using the Unity 3D game engine to capture trajectory data from the VR headset. Additionally, this application allowed the experimenter to play a beep sound via the VR headset to signal the start of each trial for participants. Participants carried the VR headset throughout all trials and could end each trial by pressing the trigger button on the VR controller. Furthermore, for the virtual characters block of the experiment, in which participants were fully immersed in VR, an indoor virtual room mirroring the physical room’s features was developed using the Unity 3D game engine. The virtual starting position aligned accurately with the physical environment, ensuring a consistent spatial reference.

3.3.2 Embodiment.

Four humans (2 females and 2 males) were recruited and trained to replicate the behaviors of the artificial agents, i.e., robots and virtual characters, as faithfully as possible. They were instructed to maintain consistent appearances throughout the entire study. The primary human agents’ appearances and behaviors are illustrated in Figure 2. Female participants exclusively interacted with female agents, while male participants interacted exclusively with male agents. The primary female agent was 25 years old, with a height of 162 cm, while the primary male agent was 24 years old and had a height of 173 cm. In addition, the experiment involved two Pepper robots, each with a height of 120 cm. The Pepper robot comes equipped with an integrated tablet positioned on its chest, which had the potential to distract participants from their primary tasks in this study; hence, the primary robot wore a T-shirt to conceal this tablet. The primary robot is depicted in Figure 2. Within the virtual indoor room of the experiment, two virtual characters, based on the Greta virtual agent [50], were positioned precisely where the physical agents had been located, following their orientations. CereProc text-to-speech was utilized for speech generation in these virtual characters. The study employed two male and two female Greta agents. Female participants exclusively interacted with female characters, while male participants interacted exclusively with male characters. The primary male virtual character had a height of 173 cm, matching the height of the human male agent, and the primary female virtual character had a height of 163 cm, matching the height of the human female agent. Detailed information about these Greta virtual agents, including their appearances and behaviors, can be found in Figure 2.

3.3.3 Participant Data Collection.

Participants were equipped with a tablet for providing feedback through between-trial, between-block, and post-study questionnaires. This tablet was conveniently placed on a table adjacent to the experimental area (Figure 3). Additionally, at the commencement of the study, participants entered their demographic information using the same tablet.

3.4 Measures

3.4.1 Joining behavior.

Participants’ joining behavior is evaluated using six different metrics; a computational sketch of metrics (3)-(6) is given after the list.

(1)

Persuasiveness: The study recorded participants’ compliance with the agent’s request to join the group at the furthest side during each trial. This data was utilized to determine the effectiveness of a behavior or embodiment in terms of persuasiveness.

(2)

Social adherence: Participants who chose to join at the furthest side had two alternatives: either walking between the two agents and crossing the group’s “o-space”, or walking around them while respecting the group’s “o-space”. This information was analyzed to quantify participants’ adherence to social norms in their joining behaviors.

(3)

Path length: This variable comprises the length of each trajectory (measured in meters) participants followed from their initial position, denoted as “S” in Figure 3, to the point where they activated the trigger button on the VR controller, indicating that they considered themselves to have joined the group.

(4)

Path duration: This variable indicates the duration (measured in seconds) of each trajectory, starting from the initiation of each trial (marked by the sounding of a beep for participants) to the moment they perceived themselves as part of the group and pressed the trigger button on the VR controller.

(5)

Distance to main agent: This variable stores the distance measured in meters between the point where participants ceased their movement and considered themselves part of the group and the location of the main agent in the environment.

(6)

Distance to secondary agent: This variable is calculated in the same manner as the previous one, but it represents the distance between the point where participants stopped their movement and felt part of the group and the location of the secondary agent in the environment.
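The paper does not publish its analysis code; the sketch below illustrates how metrics (3)-(6) could be computed, under the assumption that each trial yields a time-stamped sequence of (x, z) head positions from the headset. Function names and the sample values are ours.

```python
import math

def path_length(samples):
    """Metric (3): trajectory length in metres over (x, z) positions."""
    return sum(math.dist(a, b) for a, b in zip(samples, samples[1:]))

def path_duration(t_beep, t_trigger):
    """Metric (4): seconds from the start beep to the trigger press."""
    return t_trigger - t_beep

def final_distance(samples, agent_position):
    """Metrics (5) and (6): distance from the final position to an agent."""
    return math.dist(samples[-1], agent_position)

# Example with a hypothetical trajectory and agent positions (metres).
trajectory = [(0.0, 0.0), (0.5, 1.0), (0.5, 2.0), (0.0, 2.6)]
print(path_length(trajectory))                  # ~2.9 m
print(path_duration(0.0, 8.4))                  # 8.4 s
print(final_distance(trajectory, (-0.6, 2.6)))  # distance to main agent
print(final_distance(trajectory, (0.6, 2.6)))   # distance to secondary agent
```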

3.4.2 Perceived politeness.

This was assessed at the end of each trial using a brief questionnaire, with each question designed to measure one of the four dependent variables related to perceived politeness, namely, understanding, offense, intimacy, and respect. Participants were asked to indicate their level of agreement with the following questions on a 5-point Likert scale, ranging from “strongly disagree” to “strongly agree”. Additionally, participants had the option to provide brief text-based comments if they wished to do so.

(1)

I could precisely understand the agent’s wants.

(2)

I got offended by the agent’s action.

(3)

The agent wanted to increase intimacy with me.

(4)

The agent respected my freedom of action.

Question 1 was created to assess how clearly the agent’s request was understood (i.e., understanding). Question 2 aimed to gauge whether the agent’s request caused any offense or loss of face. Question 3 was intended to determine the degree of satisfaction related to positive face (such as intimacy or warm behavior). The final question, question 4, was designed to evaluate the level of satisfaction concerning negative face (e.g., respect for one’s choices, freedom of action, or perceived cold behavior).

3.4.3 Social presence.

This was evaluated using an 18-item social presence questionnaire [54] that specifically targeted the agent’s social presence. Participants were requested to express their level of agreement with the provided statements using a 5-point Likert scale, ranging from “strongly disagree” to “strongly agree”. This assessment allowed us to measure six variables: Co-Presence, Attentional Allocation, Message Understanding, Behavioral Interdependence, Affective Understanding, and Affective Interdependence.

3.4.4 Post-study questionnaire.

After the experiment, participants were requested to fill out a post-study survey to provide feedback on their overall experience. Finally, following their feedback submission, participants were instructed to approach each of the primary agents and position themselves at their preferred distance for initiating a conversation. This step allowed us to assess the potential influence of cultural differences in personal space preferences and the potential impact of height on the distance participants maintained from each agent.

3.5 Procedure

After collecting participants’ demographic data, we introduced them to the experimental setup and the human and non-human agents. Participants had time to familiarize themselves with the robots and to wear the VR headset to acquaint themselves with the virtual room and its characters. For the experiment, participants were asked to move to the initial location (S) at the beginning of each trial, facing the group of agents in front of them, and to initiate their movement to join the group only after hearing a beep signal from the VR headset. The beep, triggered by the experimenter from the control room, ensured that the main agent A1 could complete its politeness behavior before participants started to move toward the group.

The study employed a distinct setup to minimize the impact of VR headset use on participants’ interactions across the different conditions. Participants carried the VR headset throughout all trials, yet the method of use varied depending on the interaction type. During trials involving virtual characters, they placed the VR headset in front of their eyes to fully immerse themselves in VR. In contrast, in trials with robots and humans, the VR headset was positioned on top of their heads and not in front of their eyes. This setup was used to comprehensively record their trajectories during each trial.

Participants were instructed to end each trial, once they believed they had successfully joined the group, by pressing the trigger button on the VR controller, which they carried throughout all trials. After concluding each trial by joining the group, participants were instructed to return to their initial location. They were then required to leave the VR equipment on the table beside the experiment room and respond to the four perceived politeness questions. Once they had completed this, they were to stand at the initial location and wait for the next trial to commence. Additionally, at the end of each block, which consisted of three trials involving one embodiment, participants were asked to complete the between-block questionnaire, which focused on the social presence of the embodiment they had experienced. Participants were informed that only the experimenter controlled the start of movement, to ensure technical functionality. However, participants had full autonomy to end each trial by pressing the trigger button, signifying their decision to conclude that specific trial. There were no specific requirements regarding how they should end each trial, and they were free to do so however they preferred.

The initial positioning and angles of both the group and the participants were chosen to create a direct, least-effort, and convenient path, allowing participants to join the group at the closest side with minimal effort. However, A1 typically invited participants to the opposite side of the group, which necessitated a more effortful and inconvenient route, approximately twice the distance. The participants’ task involved starting at a distance from the group, at S, and then freely navigating the environment to join the group (see Figure 3). Participants were told that all the agents were entirely autonomous. In reality, all the artificial agents, i.e., the robots and embodied conversational agents, were controlled by the experimenter from the control room using a semi-automated approach that adhered to the Wizard of Oz (WoZ) methodology [60]. After the study, to maintain transparency and adhere to ethical guidelines, participants were debriefed that the robots and embodied conversational agents, initially portrayed as entirely autonomous, had in fact been operated by a human. Lastly, the human agents received training to mimic the behaviors of the artificial agents as closely as possible. Overall, the duration of a single experiment ranged from 45 to 60 minutes, with each individual interaction with an agent lasting between 2 and 36 seconds.

3.6 Participants

A total of 54 participants (27 male, 27 female) aged between 18 and 69 years (M = 33, SD = 11) and proficient in English were recruited for this study. Among the participants, 83% had little or no prior experience with VR and 94% had infrequent VR usage, i.e., a few times a year. Additionally, 85% of the participants had little or no familiarity with robots, and 93% reported interacting with robots infrequently or only occasionally. All data collected from participants was anonymized, and informed written consent was obtained from each participant before the commencement of the experiment. Each participant underwent nine trials, resulting in a total of 486 trials used for the final analysis.

3.7 Data Analysis

Given the non-parametric nature of the collected data, we applied the aligned rank transform for non-parametric factorial analyses [80]. Therefore, we applied an Aligned Rank Transform (ART) ANOVA for all statistical analyses presented below. For pairwise comparisons, we used a Bonferroni correction. For the correlations, we used Spearman’s rank correlation coefficient.
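The ART itself is typically run with the R ARTool package accompanying [80]; the paper does not say which implementation it used. The Python sketch below therefore illustrates only the two supporting steps named above, Bonferroni correction of a family of pairwise p-values and Spearman's rank correlation (as used for the correlations in Figure 8), on synthetic Likert-style data; all values are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# Synthetic 5-point Likert responses standing in for two questionnaire
# scales (e.g., co-presence vs. perceived understanding), one per participant.
co_presence = rng.integers(1, 6, size=54)
understanding = rng.integers(1, 6, size=54)

# Spearman's rank correlation coefficient between the two scales.
rho, p = spearmanr(co_presence, understanding)
print(f"rho = {rho:.2f}, p = {p:.3f}")

# Bonferroni correction applied to a family of pairwise-comparison p-values.
raw_p = [0.004, 0.030, 0.200]  # illustrative uncorrected p-values
reject, p_adjusted, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
print(list(zip(raw_p, p_adjusted, reject)))
```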


4 RESULTS


Figure 4: Persuasiveness grouped by type of agent and behavior, encompassing three distinct scenarios: the count of instances where participants joined the group at the closest side without crossing the o-space via a convenient route (Closest/No), joined at the furthest side without crossing the o-space via an inconvenient route (Furthest/No), and joined at the furthest side while crossing the group’s o-space via an unsocial route (Furthest/Yes).


Figure 5: Overview of results: means and standard errors for path duration, path length, distance to the main agent, and distance to the secondary agent. VC = Virtual Character. BSL = Baseline, IND = Indirect, PRO = Proposing

The results indicated that politeness behaviors had a significant impact on persuasiveness and on the perception of the agent’s politeness, whereas embodiment did not show a notable effect. We found that participants took less time to finish walking with humans than with the other two agent types, and with indirect behavior than with the other two behaviors. Moreover, the path length participants walked was longer with proposing behavior than with the other two behaviors, but did not differ across agent types. Participants’ final distance to the main agent was shorter with robots than with the other two agent types, and with proposing behavior than with the other two behaviors. Lastly, participants’ final distance to the secondary agent was shorter with robots and virtual characters than with humans, and shorter with proposing and indirect behaviors than with the baseline behavior. We outline the results in detail in the following subsections.

4.1 Joining Behavior

4.1.1 Persuasiveness and Social Adherence.

We counted the number of times participants joined the group of agents at the furthest side while walking between the two agents (by taking an unsocial route) or walking around them (by taking an inconvenient route). Additionally, they had an option to join at the closest side (by taking a convenient route). We illustrate these behaviors in Figure 4.

4.1.2 Path Length.

The path length participants walked was comparable in the presence of humans (Md = 2.75 m, IQR = 1.86), robots (Md = 2.9 m, IQR = 1.18), and virtual characters (Md = 2.8 m, IQR = 1). This finding was supported by the non-statistically significant main effect for the type of agent (F(2, 106) = 2.9, p > 0.05, η2 = 0.05). As for the type of behavior, the path length participants walked was longer with proposing behavior (Md = 4.2 m, IQR = 1.88) than with indirect (Md = 2.7 m, IQR = 0.54) and baseline (Md = 2.71 m, IQR = 0.53) behaviors. This finding was supported by the statistically significant main effect for the type of behavior (F(2, 106) = 32, p < 0.001, η2 = 0.37). The post-hoc analysis has shown statistically significant differences between the proposing behavior and the two other types (p < 0.001). However, there were no statistically significant differences between baseline and indirect behaviors (p > 0.05). Finally, we observed a statistically significant interaction effect for agent * behaviors (F(4, 212) = 4.4, p < 0.01, η2 = 0.08). However, none of the pairwise comparisons were statistically significant (p > 0.05) due to the p-value correction.


Figure 6: Perceived politeness scales, categorized by the type of agent and behavior: Understanding, offense, intimacy, and respect. Strongly disagree indicates low understanding/offense/intimacy/respect, and strongly agree – high understanding/offense/intimacy/respect. VC = Virtual Character, R = Robot, H = Human, PRO = Proposing, IND = Indirect, BSL = Baseline.

4.1.3 Path Duration.

Participants took less time to finish walking with humans (Md = 7 sec, IQR = 3.36) than with robots (Md = 8.43 sec, IQR = 4.42) and virtual characters (Md = 8.7 sec, IQR = 4.1). This finding was supported by the statistically significant main effect for the type of agent (F(2, 106) = 17.1, p < 0.001, η2 = 0.24). The post-hoc analysis has shown statistically significant differences between humans and the other two types of agents (p < 0.001). However, no statistically significant differences existed between robots and virtual characters (p > 0.05). As for the type of behavior, we found that participants took less time to finish walking with indirect behavior (Md = 7.09 sec, IQR = 3.75) than with baseline (Md = 8.01 sec, IQR = 4.6) and proposing (Md = 8.5 sec, IQR = 3.74) behaviors. This finding was supported by the statistically significant main effect for the type of behavior (F(2, 106) = 10.5, p < 0.001, η2 = 0.17). The post-hoc analysis has shown statistically significant differences between indirect and the other two types of behavior (p < 0.001). However, there were no statistically significant differences between baseline and proposing behaviors (p > 0.05). Finally, we did not observe a statistically significant interaction effect for agent * behaviors (F(4, 212) = 1.66, p > 0.05, η2 = 0.03).

4.1.4 Distance to the main agent.

Participants’ final distance to the main agent was shortest with robots (Md = 1.1 m, IQR = 0.34), followed by virtual characters (Md = 1.17 m, IQR = 0.36) and humans (Md = 1.24 m, IQR = 0.3). This finding was supported by the statistically significant main effect for the type of agent (F(2, 106) = 22.3, p < 0.001, η2 = 0.29). The post-hoc analysis has shown statistically significant differences between robots and the other two types of agents (p < 0.001). Moreover, participants’ distance to the virtual characters was statistically significantly shorter than to humans (p = 0.045). As for the type of behavior, we found that participants’ final distance to the main agent was shorter with proposing behavior (Md = 1.11 m, IQR = 0.35) than with indirect (Md = 1.16 m, IQR = 0.36) and baseline (Md = 1.19 m, IQR = 0.39) behaviors. This finding was supported by the statistically significant main effect for the type of behavior (F(2, 106) = 10.7, p < 0.001, η2 = 0.17). The post-hoc analysis has shown statistically significant differences between proposing and baseline behavior (p < 0.001), and between proposing and indirect behavior (p = 0.012). However, there were no statistically significant differences between baseline and indirect behaviors (p > 0.05). Finally, we did not observe a statistically significant interaction effect for agent * behaviors (F(4, 212) = 0.76, p > 0.05, η2 = 0.014).

4.1.5 Distance to the secondary agent.

Participants’ final distance to the secondary agent was shortest with robots (Md = 1.21 m, IQR = 0.26), followed by virtual characters (Md = 1.21 m, IQR = 0.28) and humans (Md = 1.27 m, IQR = 0.24). This finding was supported by the statistically significant main effect for the type of agent (F(2, 106) = 3.7, p = 0.027, η2 = 0.065). The post-hoc analysis has shown statistically significant differences between robots and humans (p = 0.039), but not between virtual characters and humans (p > 0.05) or virtual characters and robots (p > 0.05). As for the type of behavior, we found that participants’ final distance to the secondary agent was shorter with proposing (Md = 1.23 m, IQR = 0.23) and indirect (Md = 1.22 m, IQR = 0.25) behaviors than with the baseline behavior (Md = 1.25 m, IQR = 0.32). This finding was supported by the statistically significant main effect for the type of behavior (F(2, 106) = 3.25, p = 0.042, η2 = 0.057). The post-hoc analysis has shown statistically significant differences between proposing and baseline behavior (p = 0.045), but not between proposing and indirect behavior (p > 0.05) or baseline and indirect behaviors (p > 0.05). Finally, we did not observe a statistically significant interaction effect for agent * behaviors (F(4, 212) = 1.33, p > 0.05, η2 = 0.024).

4.2 Perceived Politeness: Understanding, Offense, Intimacy, Respect

4.2.1 Understanding.

Participants’ understanding of the agent’s invitation was higher with humans (Md = 4, IQR = 2), followed by virtual characters (Md = 4, IQR = 2) and robots (Md = 4, IQR = 2). This finding was supported by the statistically significant main effect for the type of agent (F(2, 106) = 4.46, p = 0.013, η2 = 0.07). The post-hoc analysis has shown statistically significant differences between robots and humans (p = 0.01), but not between virtual characters and humans (p > 0.05) and virtual characters and robots (p > 0.05). As for the type of behavior, we found that participants’ understanding of the agent’s invitation was higher in the presence of proposing behavior (Md = 5, IQR = 1), followed by indirect (Md = 4, IQR = 1) and baseline (Md = 2, IQR = 2.5) behaviors. This finding was supported by the statistically significant main effect for the type of behavior (F(2, 106) = 123, p < 0.001, η2 = 0.7). The post-hoc analysis has shown statistically significant differences between all pairs (p < 0.001). Finally, we observed a statistically significant interaction effect for agent * behaviors (F(4, 212) = 3.3, p = 0.011, η2 = 0.058). However, none of the pairwise comparisons were statistically significant (p > 0.05) due to the p-value correction.


Figure 7: Overview of six scales of social presence. The maximum value for all scales is 5, with higher values indicating a more favorable result. VC = Virtual Character.

4.2.2 Offense.

Participants got offended by the agent’s actions comparably with humans (Md = 1, IQR = 0), virtual characters (Md = 1, IQR = 0), and robots (Md = 1, IQR = 0). This finding was supported by the statistically non-significant main effect for the type of agent (F(2, 106) = 0.0007, p > 0.05, η2 = 0.001). As for the type of behavior, we found that participants got offended by the agent’s actions more in the presence of baseline behavior (Md = 1, IQR = 1), followed by proposing (Md = 1, IQR = 0) and indirect (Md = 1, IQR = 0) behaviors. This finding was supported by the statistically significant main effect for the type of behavior (F(2, 106) = 28, p < 0.001, η2 = 0.001). The post-hoc analysis has shown statistically significant differences between indirect and proposing (p < 0.001) and indirect and baseline (p < 0.001) behaviors, but not between baseline and proposing (p > 0.05). Finally, we did not observe a statistically significant interaction effect for agent * behaviors (F(4, 212) = 1.37, p > 0.05, η2 < 0.001).

4.2.3 Intimacy.

Participants perceived that all agents wanted to increase intimacy with them comparably: humans (Md = 3, IQR = 2), virtual characters (Md = 3, IQR = 2), and robots (Md = 3, IQR = 2). This finding was supported by the statistically non-significant main effect for the type of agent (F(2, 106) = 1.6, p > 0.05, η2 = 0.03). As for the type of behavior, we found that proposing (Md = 4, IQR = 1) and indirect (Md = 4, IQR = 1) behavior led to a higher willingness to increase intimacy than baseline (Md = 2, IQR = 2). This finding was supported by the statistically significant main effect for the type of behavior (F(2, 106) = 51, p < 0.001, η2 = 0.49). The post-hoc analysis has shown statistically significant differences between baseline and proposing (p < 0.001) and baseline and indirect (p < 0.001) behaviors, but not between indirect and proposing (p > 0.05). Finally, we did not observe a statistically significant interaction effect for agent * behaviors (F(4, 212) = 1.15, p > 0.05, η2 = 0.021).

4.2.4 Respect.

Participants perceived robots (Md = 5, IQR = 1) respecting their freedom of action more than humans (Md = 5, IQR = 1) and virtual characters (Md = 5, IQR = 1). This finding was supported by the statistically significant main effect for the type of agent (F(2, 106) = 13, p < 0.001, η2 = 0.19). The post-hoc analysis has shown statistically significant differences between robots and humans (p = 0.014) and robots and virtual characters (p < 0.001), but not between virtual characters and humans (p > 0.05). As for the type of behavior, we found that participants perceived indirect behavior (Md = 5, IQR = 1) respecting their freedom of action more than proposing (Md = 4, IQR = 1.75) and baseline (Md = 4, IQR = 1). This finding was supported by the statistically significant main effect for the type of behavior (F(2, 106) = 11.7, p < 0.001, η2 = 0.18). The post-hoc analysis has shown statistically significant differences between indirect and proposing (p < 0.001) and indirect and baseline (p < 0.001) behaviors, but not between baseline and proposing (p > 0.05). Finally, we did not observe a statistically significant interaction effect for agent * behaviors (F(4, 212) = 0.83, p > 0.05, η2 = 0.015).

4.3 Social Presence

4.3.1 Co-Presence.

Participants’ feeling of co-presence was higher with humans (Md = 5, IQR = 0.33) than with robots (Md = 4.3, IQR = 1) and virtual characters (Md = 4.6, IQR = 1) (Figure 7). This finding was supported by the statistically significant main effect for the type of agent (F(2, 106) = 10.91, p < 0.001, η2 = 0.17). The post-hoc analysis has shown statistically significant differences between humans and robots (p < 0.001) and humans and virtual characters (p < 0.01), but not between virtual characters and robots (p = 0.7). However, we did not observe a statistically significant main effect for the type of behavior (F(2, 106) = 1.8, p = 0.16, η2 = 0.033) and interaction effect for agent * behaviors (F(4, 212) = 0.09, p > 0.05, η2 = 0.001).

4.3.2 Attentional Allocation.

Similarly, participants’ attentional allocation was higher with humans (Md = 4.6, IQR = 1) than with robots (Md = 4.1, IQR = 1) and virtual characters (Md = 4, IQR = 1) (Figure 7). This finding was supported by the statistically significant main effect for the type of agent (F(2, 106) = 4.8, p < 0.001, η2 = 0.08). The post-hoc analysis showed statistically significant differences between humans and robots (p = 0.029) and between humans and virtual characters (p = 0.017), but not between virtual characters and robots (p = 0.9). However, we observed neither a statistically significant main effect for the type of behavior (F(2, 106) = 0.75, p = 0.47, η2 = 0.013) nor an interaction effect for agent * behaviors (F(4, 212) = 0.7, p > 0.05, η2 = 0.013).

4.3.3 Message Understanding.

Participants’ message understanding was higher with humans (Md = 3.6, IQR = 1.3) than with robots (Md = 3, IQR = 1) and virtual characters (Md = 3, IQR = 1.3) (Figure 7). This finding was supported by the statistically significant main effect for the type of agent (F(2, 106) = 23.5, p < 0.001, η2 = 0.3). The post-hoc analysis showed statistically significant differences between humans and robots (p < 0.001) and between humans and virtual characters (p < 0.001), but not between virtual characters and robots (p = 0.67). However, we observed neither a statistically significant main effect for the type of behavior (F(2, 106) = 0.49, p = 0.6, η2 = 0.009) nor an interaction effect for agent * behaviors (F(4, 212) = 0.95, p = 0.43, η2 = 0.017).

4.3.4 Behavioral Interdependence.

Participants’ behavioral interdependence was higher with humans (Md = 3.6, IQR = 1.3) than with robots (Md = 3.3, IQR = 1.3) and virtual characters (Md = 3.3, IQR = 1.3) (Figure 7). This finding was supported by the statistically significant main effect for the type of agent (F(2, 106) = 4.4, p = 0.014, η2 = 0.076). The post-hoc analysis showed statistically significant differences between humans and robots (p = 0.023) and between humans and virtual characters (p = 0.042), but not between virtual characters and robots (p = 0.96). However, we observed neither a statistically significant main effect for the type of behavior (F(2, 106) = 1.2, p = 0.29, η2 = 0.022) nor an interaction effect for agent * behaviors (F(4, 212) = 2.9, p > 0.05, η2 = 0.05).

Figure 8: Overview of correlations between categories of social presence and perceived politeness scale items.

4.3.5 Affective Understanding.

Participants’ affective understanding was higher with humans (Md = 3, IQR = 1.3) than with robots (Md = 2, IQR = 1.3) and virtual characters (Md = 2, IQR = 1.6) (Figure 7). This finding was supported by the statistically significant main effect for the type of agent (F(2, 106) = 29.9, p < 0.001, η2 = 0.36). The post-hoc analysis showed statistically significant differences between humans and robots (p < 0.001) and between humans and virtual characters (p < 0.001), but not between virtual characters and robots (p = 0.82). However, we observed neither a statistically significant main effect for the type of behavior (F(2, 106) = 0.79, p = 0.45, η2 = 0.014) nor an interaction effect for agent * behaviors (F(4, 212) = 1.6, p = 0.15, η2 = 0.03).

4.3.6 Affective Interdependence.

Finally, participants’ affective interdependence was higher with humans (Md = 2.6, IQR = 1.6) than with robots (Md = 2, IQR = 1.6) and virtual characters (Md = 1.6, IQR = 1.3) (Figure 7). This finding was supported by the statistically significant main effect for the type of agent (F(2, 106) = 41.5, p < 0.001, η2 = 0.43). The post-hoc analysis showed statistically significant differences between humans and robots (p < 0.001) and between humans and virtual characters (p < 0.001), but not between virtual characters and robots (p = 0.31). However, we observed neither a statistically significant main effect for the type of behavior (F(2, 106) = 1.15, p = 0.31, η2 = 0.021) nor an interaction effect for agent * behaviors (F(4, 212) = 2.09, p = 0.08, η2 = 0.038).

Table 3:
Block (trials)           I (1-3)   II (4-6)   III (7-9)   Total
Requested                108       108        108         324
Successful                31        42         43         116
Success rate              29%       39%        40%         36%
Respecting o-space        18        25         23          66
Social adherence rate     58%       60%        53%         57%

Table 3: Breakdown, per block, of the trials in which the main agent (A1) requested and successfully persuaded participants to join the group at the furthest side, and of the trials among these in which participants did not walk through the group’s o-space. In total, participants were successfully persuaded to join the group at the furthest side in 36% of the trials; among these cases, they refrained from crossing the o-space in 57% of the trials. Note: this calculation excludes the BSL condition, in which the agent made no request to participants.
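The rates in Table 3 follow directly from its counts. As a quick check, the sketch below recomputes the per-block and total success and social adherence rates using only the figures shown in the table; the comment labels are assumptions drawn from the caption.

```python
# Recompute the Table 3 rates from the raw counts per block.
requested  = [108, 108, 108]  # persuasion requests per block (non-BSL trials)
successful = [31, 42, 43]     # participant joined at the furthest side
respecting = [18, 25, 23]     # ...and did not cross the group's o-space

for blk, (req, suc, resp) in enumerate(zip(requested, successful, respecting), 1):
    print(f"Block {blk}: success {suc / req:.0%}, adherence {resp / suc:.0%}")

print(f"Total: success {sum(successful) / sum(requested):.0%}, "
      f"adherence {sum(respecting) / sum(successful):.0%}")
```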

4.4 Correlations

We discovered statistically significant correlations between co-presence and understanding (rs = 0.19, p = 0.014), offense (rs = −0.24, p < 0.01), intimacy (rs = 0.2, p < 0.001), and respect (rs = 0.31, p < 0.001). We also found a statistically significant correlation between attentional allocation and respect (rs = 0.15, p = 0.049), but not with understanding (rs = 0.018, p = 0.8), offense (rs = −0.15, p = 0.052), or intimacy (rs = 0.11, p = 0.14). Message understanding correlated significantly with understanding (rs = 0.24, p = 0.0015) and respect (rs = 0.3, p < 0.001), but not with offense (rs = −0.09, p = 0.21) or intimacy (rs = −0.005, p = 0.95). For behavioral interdependence, we found a statistically significant correlation with intimacy (rs = 0.19, p = 0.011), but not with understanding (rs = 0.075, p = 0.33), offense (rs = −0.066, p = 0.39), or respect (rs = 0.029, p = 0.71). Affective understanding correlated significantly with understanding (rs = 0.26, p < 0.001), but not with offense (rs = −0.025, p = 0.75), intimacy (rs = 0.12, p = 0.1), or respect (rs = 0.01, p = 0.8). Finally, we discovered statistically significant correlations between affective interdependence and understanding (rs = 0.22, p = 0.004) and intimacy (rs = 0.18, p = 0.019), but not with offense (rs = 0.05, p = 0.48) or respect (rs = 0.03, p = 0.7). An overview of all correlations is shown in Figure 8.
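The correlation analysis itself is straightforward to reproduce. Below is a minimal sketch using Spearman’s rank correlation; the column names for the six social presence subscales and four politeness items, as well as the input file, are illustrative assumptions rather than the study’s actual variable names.

```python
# Sketch: Spearman correlations between the six social presence subscales
# and the four perceived politeness items (column names are assumptions).
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("questionnaires.csv")  # hypothetical per-trial scores
presence = ["co_presence", "attentional_allocation", "msg_understanding",
            "behavioral_interdep", "affective_understanding",
            "affective_interdep"]
politeness = ["understanding", "offense", "intimacy", "respect"]

for p in presence:
    for q in politeness:
        rs, pval = spearmanr(df[p], df[q])
        print(f"{p} x {q}: rs = {rs:.2f}, p = {pval:.3f}")
```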

5 DISCUSSION

In this section, we discuss in detail how participants perceived the understanding, offense, intimacy, and respect conveyed by different agents with varying politeness behaviors, and how these behaviors shaped participants’ decisions and movements when joining the group.

5.1 Politeness over Embodiment

Our findings counter the existing research on the role of embodiment in human-agent interactions [7, 40, 82]. Instead, our results align with the CASA paradigm [39, 48, 59] and anthropomorphic design principles [22], indicating that individuals tend to attribute social qualities to artificial entities and treat them as if they possess social attributes. In this specific scenario, embodiment does not appear to wield significant influence on the persuasiveness and perceived politeness of the agents. Instead, the study underscores the paramount importance of politeness behaviors as the driving factor in persuading participants to join a group at the furthest side and in shaping participants’ joining behaviors and perceptions of the agent. These results challenge the conventional emphasis on the physical form or representation of agents, suggesting that the strategies employed in communication, particularly those related to politeness, hold more sway over users’ behaviors and impressions. They also have broader implications for the design and implementation of agents, as they underscore the central role of communication strategies in achieving successful human-agent interactions.

Specifically, the proposing behavior proved more effective in persuading participants to join the group of agents at its furthest side. This success may be attributed to proposing providing participants with more explicit cues to follow, leading to higher effectiveness in achieving the intended outcome. This observation is further supported by the greater understanding associated with the proposing behavior compared to the other behaviors. However, while proposing is clearer and more convincing in guiding participants, it may also carry the risk of being perceived as more constraining and potentially more offensive than the indirect behavior. Consequently, despite its clarity and effectiveness in guiding participants, designers should consider the possible adverse reactions linked to using proposing as a persuasive technique, such as an increased sense of losing face for users and the imposition of restrictions on their freedom of action.

Consistent with prior research [35, 90, 91, 92], our results show that the Proposing behavior, which is related to the positive politeness strategy, can prove highly effective in scenarios that require a delicate balance between persuasion and a positive user experience. For instance, if a social robot is designed to assist individuals with disabilities in a workplace setting [29], its primary objective may be to persuade the user to adhere to specific workplace safety and accessibility protocols while simultaneously fostering a friendly and supportive relationship. In this scenario, employing the Proposing behavior can be advantageous: it allows the robot to effectively convey the importance of following safety guidelines while maintaining a positive rapport with the user. This approach can lead to a stronger connection between the robot and the user, encouraging the user to embrace the guidelines voluntarily. The user is then likelier to perceive the robot as a helpful and persuasive collaborator rather than feeling pressured or obligated to comply with the guidelines.

5.2 Propose and I will Join

Previous psychological research [31] has revealed that humans tend to avoid choices demanding extra effort when presented with similar options. These findings align with the study’s outcomes: participants in the BSL and IND conditions, where they did not receive clear instructions about the joining side, mostly chose the less demanding approach of joining at the closest side. Conversely, they were more inclined to follow the agent’s request to join at the farthest side when provided with clearer instructions, such as the Proposing behavior, although in doing so they sometimes traversed the group’s center (i.e., its o-space). The results showed that participants generally adhered to social norms, even when it meant expending additional effort by opting for an inconvenient route to join a group of agents. As shown in Table 3, and consistent with previous findings in the field [35, 36, 90, 91, 92], in most instances (57%) participants prioritized adherence to social norms over the effort saved by taking the unsocial route through the o-space of the group of agents. Additionally, Table 3 illustrates a noticeable growth in persuasion from the first to the third experiment block (29% to 40%), aligning with prior research [35, 90, 91, 92]. However, participants’ adherence to social norms increased slightly from the first to the second block before declining slightly in the third block. This pattern aligns with the concept of the effort paradox [34], which suggests that investing effort can boost perceived value, motivating individuals to engage in tasks requiring greater exertion. Nevertheless, participants might have been tired in the final block, possibly contributing to their reluctance to choose the longer route despite still complying with the request to join at the farthest side.
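Operationally, “crossing the o-space” can be checked geometrically from a tracked trajectory. The sketch below assumes the o-space is approximated as a circle centered on the midpoint between the two agents; the radius value is an illustrative parameter, not the definition used in the study.

```python
import numpy as np

def crosses_o_space(trajectory, agent_a, agent_b, radius=0.6):
    """Return True if any tracked point enters the circular o-space.

    trajectory: (N, 2) array of participant x/y positions (metres)
    agent_a, agent_b: (2,) positions of the two agents
    radius: o-space radius; an illustrative value, not the study's
    """
    center = (np.asarray(agent_a) + np.asarray(agent_b)) / 2.0
    dists = np.linalg.norm(np.asarray(trajectory) - center, axis=1)
    return bool((dists < radius).any())

# A path skirting around the group never enters the o-space circle.
path = np.array([[0.0, -2.0], [1.2, -1.0], [1.3, 0.0], [0.9, 1.0]])
print(crosses_o_space(path, agent_a=[-0.5, 0.0], agent_b=[0.5, 0.0]))  # False
```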

5.3 Comfortable with Humans while Staying Close to Robots

Participants took less time to complete their walking actions when interacting with humans than with the artificial agents, namely virtual characters and robots. This suggests that participants exhibited more efficient movement when joining groups involving human agents. The quicker completion of actions in the presence of humans might indicate a higher level of comfort or familiarity, as humans are more relatable and predictable in social interactions. The observed effect could also stem from the novelty of interactions with humanoid robots and virtual characters: in environments where these non-human agents are more prevalent, the effect might not be as pronounced, and we could anticipate similar reactions towards both human and non-human agents.

Furthermore, when considering politeness behaviors, participants exhibited shorter completion times for their walking actions in the presence of the Indirect behavior, in contrast to the other two behavior types (Baseline and Proposing). Concerning path length, participants traversed longer distances when encountering the Proposing behavior than the other two behaviors (Baseline and Indirect). Notably, the Proposing behavior was the most effective in persuading participants to opt for a longer route to reach the group at the furthest side. Together, these findings suggest that the increased distance and time associated with the Proposing behavior can be attributed to participants choosing a longer path to join the group at the furthest side via an unsocial or inconvenient route. With the Baseline behavior, participants required more time to process and assess the agent’s behavior and decide on their preferred joining approach, often leading them to select the closest side.

Moreover, participants’ final distance to the main and secondary agents was shorter when interacting with robots than with the other two agent types (humans and virtual characters). This indicates that participants approached robots more closely during group-joining interactions, which could be linked to participants’ perceptions of robots as being “cute” and “friendly”, as highlighted in the qualitative feedback. Finally, participants’ final distance to the main and secondary agents was shorter when encountering the proposing and indirect behaviors than the baseline behavior. This implies that participants may have perceived proposing and indirect behaviors as inviting or accommodating, encouraging them to approach the secondary agents more closely.
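The movement measures discussed above (completion time, path length, final interpersonal distance) can be derived directly from tracked 2D positions. A minimal sketch follows, assuming planar position samples at a fixed tracking rate; the 90 Hz rate and all variable names are assumptions, not the study’s exact logging setup.

```python
import numpy as np

def path_length(points):
    """Total distance walked, from consecutive (x, y) position samples."""
    pts = np.asarray(points)
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

def completion_time(n_samples, hz=90.0):
    """Walking duration, assuming a fixed tracker sampling rate."""
    return n_samples / hz

def final_distance(points, agent_pos):
    """Distance between the participant's last sample and an agent."""
    return float(np.linalg.norm(np.asarray(points)[-1] - np.asarray(agent_pos)))

traj = np.array([[0.0, -2.0], [0.3, -1.4], [0.7, -0.9], [1.0, -0.4]])
print(path_length(traj))                 # metres walked
print(completion_time(len(traj)))        # seconds, at the assumed rate
print(final_distance(traj, [1.2, 0.0]))  # metres to the main agent
```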

5.4 Design Implications for Robots and Virtual Characters

The insights derived from our study yield valuable design implications specifically tailored to enhancing the behaviors of robots and virtual characters in human-agent interactions. Designers should prioritize integrating and refining politeness strategies within the behavioral repertoire of robots and virtual characters: our study indicates that politeness strategies exert a more substantial influence on user behaviors and impressions than the embodiment of the agents when persuading users to act. These politeness strategies could be combined with existing models that enable robots to join a group while following social norms, as demonstrated in prior work by Imayoshi et al. [32, 33], to create a more comprehensive framework for agent behavior, as sketched below. Such a framework would not only involve adherence to and adaptation within social spaces but also entail polite behavior that invites newcomers to join during human-robot interactions.

Furthermore, the implementation of clear instructions, exemplified by the Proposing behavior derived from the positive politeness strategy, is effective in persuading users to exert additional effort for socially preferable actions. Designers should carefully balance persuasive techniques with user experience considerations, accounting for potential user perceptions of imposition or constraint. Additionally, recognizing the differences in user comfort and interaction efficiency between human and artificial agents, designers should aim to create behaviors that foster a higher level of user comfort and familiarity, especially with artificial agents; improving the efficiency and clarity of interactions with robots and virtual characters may elevate the overall user experience.

Our study also reveals a proximity preference towards robots, potentially stemming from perceptions of these agents as “cute” and “friendly”. Designers can leverage these qualities by investigating the factors that contribute to perceived friendliness in robot design; incorporating such elements could positively influence user interactions and perceptions.
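As a concrete illustration of such a framework, here is a hypothetical sketch of how the three behavior types could sit behind a single joining-invitation interface. The strategy names mirror the study’s conditions, but the utterances are placeholders rather than the scripts used in the experiment.

```python
from enum import Enum
from typing import Optional

class Politeness(Enum):
    BASELINE = "baseline"    # no explicit request is made
    INDIRECT = "indirect"    # hints; preserves freedom of action
    PROPOSING = "proposing"  # explicit, positive-politeness request

def invitation(strategy: Politeness, side: str = "furthest") -> Optional[str]:
    """Return an utterance inviting a newcomer to join at a given side.

    The utterances are illustrative placeholders, not the scripts
    used in the experiment.
    """
    if strategy is Politeness.BASELINE:
        return None  # the agent only maintains gaze
    if strategy is Politeness.INDIRECT:
        return f"There is some space over on the {side} side."
    return f"Please join us here, on the {side} side!"

print(invitation(Politeness.PROPOSING))
```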

6 LIMITATIONS AND FUTURE WORK

Our findings should be considered in light of cultural variations in the interpretation of politeness. Different cultures may define politeness differently, which could impact how participants perceive and respond to the politeness strategies used in the study. For instance, in some cultures, walking through the center (or o-space) of a group may be viewed as less impolite than walking behind individuals when taking a route around the outside of the group. The influence of social context on participant perception is another area that could benefit from further exploration in future studies. Additionally, this study focused on a specific set of politeness strategies and embodiments, which may not cover the entire spectrum of possible behaviors. The specific formation of two agents in the study is another limitation that might restrict the generalizability of the results to scenarios with different spatial configurations. Future research could address these limitations by replicating the study with diverse spatial arrangements and social distances. Also, the study examined short interactions, and the results may differ in longer-term scenarios. Future work could explore a broader range of politeness strategies and embodiments and investigate the effects of long-term interaction on user responses. Furthermore, extending this research to real-world applications, such as health care [58], customer service [83], or education [5], could provide practical insights into human-agent interactions in various contexts. In addition, research can investigate the psychological mechanisms driving social behavior, group dynamics, and movement patterns, and work towards optimizing them for diverse contexts. Finally, in this study the agents consistently maintained eye contact with participants throughout each trial. Exploring different levels of eye contact in this scenario could be beneficial [1], considering that research has indicated that varying levels of eye contact can exert contrasting effects in diverse situations [4].

7 CONCLUSION

In this paper, we explored how politeness behaviors (verbal and nonverbal) exhibited by three distinct embodiments (humans, robots, and virtual characters) influence individuals’ decisions to join a group of two agents in a controlled experiment. We found that embodiment does not strongly affect agent persuasiveness or perceived politeness during group joining, highlighting instead the influence of politeness behaviors. Direct and explicit politeness strategies (positive politeness, realized through the proposing behavior) were notably successful in persuading participants to join at the furthest side. Moreover, participants tended to follow social norms by not crossing the group’s o-space while joining at the furthest side. Our study also demonstrated that agent embodiment and politeness behaviors influenced participants’ movement patterns during group-joining interactions: humans led to quicker movement completion, proposing behavior resulted in longer path lengths, and robots prompted participants to approach the main agent more closely. These findings contribute to our understanding of how agents and politeness strategies shape the social space aspects of human-agent interactions, which can inform the design of more effective and user-friendly AI systems and robots. Further research can delve deeper into the underlying psychological mechanisms driving these movement patterns and explore ways to optimize them in various contexts.

Supplemental Material

Video Presentation (mp4, 32 MB)

Video Figure (mp4, 220.5 MB): a short video that summarizes the work and explains the main points of the study.

Video Figure, Low Resolution (mp4, 52.1 MB): the same summary video at a lower resolution.

References

  1. Julius Albiz, Olga Viberg, and Andrii Matviienko. 2023. Guiding Visual Attention on 2D Screens: Effects of Gaze Cues from Avatars and Humans. In Proceedings of the 2023 ACM Symposium on Spatial User Interaction (Sydney, NSW, Australia) (SUI ’23). Association for Computing Machinery, New York, NY, USA, 9 pages.
  2. Philipp Althaus, Hiroshi Ishiguro, Takayuki Kanda, Takahiro Miyashita, and Henrik Iskov Christensen. 2004. Navigation for human-robot interaction tasks. In IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA ’04. IEEE, 1894–1900, Vol. 2.
  3. P. Althaus, H. Ishiguro, T. Kanda, T. Miyashita, and H. I. Christensen. 2004. Navigation for human-robot interaction tasks. In IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA ’04, Vol. 2. IEEE, 1894–1900.
  4. Michael Argyle and Janet Dean. 1965. Eye-contact, distance and affiliation. Sociometry (1965), 289–304.
  5. Ruth Aylett, Marco Vala, Pedro Sequeira, and Ana Paiva. 2007. FearNot!–an emergent narrative approach to virtual dramas for anti-bullying education. In Virtual Storytelling. Using Virtual Reality Technologies for Storytelling: 4th International Conference, ICVS 2007, Saint-Malo, France, December 5-7, 2007. Proceedings 4. Springer, 202–205.
  6. Jeremy N Bailenson, Nick Yee, Dan Merget, and Ralph Schroeder. 2006. The effect of behavioral realism and form realism of real-time avatar faces on verbal disclosure, nonverbal disclosure, emotion recognition, and copresence in dyadic interaction. Presence: Teleoperators and Virtual Environments 15, 4 (2006), 359–372.
  7. Wilma A Bainbridge, Justin W Hart, Elizabeth S Kim, and Brian Scassellati. 2011. The benefits of interactions with physically present robots over video-displayed agents. International Journal of Social Robotics 3 (2011), 41–52.
  8. Marc Becker, Dominik Mahr, and Gaby Odekerken-Schröder. 2023. Customer comfort during service robot interactions. Service Business 17, 1 (2023), 137–165.
  9. Prabhat Kumar Bharti, Meith Navlakha, Mayank Agarwal, and Asif Ekbal. 2023. PolitePEER: does peer review hurt? A dataset to gauge politeness intensity in the peer reviews. Language Resources and Evaluation (2023), 1–23.
  10. Frank Biocca. 1997. The cyborg’s dilemma: Progressive embodiment in virtual environments. Journal of Computer-Mediated Communication 3, 2 (1997), JCMC324.
  11. Frank Biocca, Chad Harms, and Judee K Burgoon. 2003. Toward a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators & Virtual Environments 12, 5 (2003), 456–480.
  12. Jim Blascovich, Jack Loomis, Andrew C Beall, Kimberly R Swinth, Crystal L Hoyt, and Jeremy N Bailenson. 2002. Immersive virtual environment technology as a methodological tool for social psychology. Psychological Inquiry 13, 2 (2002), 103–124.
  13. Penelope Brown and Stephen C. Levinson. 1978. Universals in language usage: Politeness phenomena (3 ed.). Cambridge University Press, 56–311.
  14. A. Cafaro, B. Ravenet, M. Ochs, H. H. Vilhjálmsson, and C. Pelachaud. 2016. The effects of interpersonal attitude of a group of agents on user’s presence and proxemics behavior. ACM Transactions on Interactive Intelligent Systems (TiiS) 6, 2 (2016), 1–33.
  15. Justine Cassell. 2001. Embodied conversational agents: representation and intelligence in user interfaces. AI Magazine 22, 4 (2001), 67–67.
  16. Gary Charness, Uri Gneezy, and Michael A Kuhn. 2012. Experimental methods: Between-subject and within-subject design. Journal of Economic Behavior & Organization 81, 1 (2012), 1–8.
  17. Filipa Correia, Samuel F Mascarenhas, Samuel Gomes, Patrícia Arriaga, Iolanda Leite, Rui Prada, Francisco S Melo, and Ana Paiva. 2019. Exploring prosociality in human-robot teams. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 143–151.
  18. M. Cristani, L. Bazzani, G. Paggetti, A. Fossati, D. Tosato, A. Del Bue, G. Menegaz, and V. Murino. 2011. Social interaction discovery by statistical analysis of F-formations. In BMVC, Vol. 2. 4.
  19. Kerstin Dautenhahn, Bernard Ogden, and Tom Quick. 2002. From embodied to socially embedded agents–implications for interaction-aware robots. Cognitive Systems Research 3, 3 (2002), 397–428.
  20. Eric Deng, Bilge Mutlu, and Maja J Mataric. 2019. Embodiment in socially interactive robots. Foundations and Trends® in Robotics 7, 4 (2019), 251–356.
  21. Brian R Duffy. 2003. Anthropomorphism and the social robot. Robotics and Autonomous Systems 42, 3-4 (2003), 177–190.
  22. Nicholas Epley, Adam Waytz, and John T Cacioppo. 2007. On seeing human: a three-factor theory of anthropomorphism. Psychological Review 114, 4 (2007).
  23. Terrence Fong, Illah Nourbakhsh, and Kerstin Dautenhahn. 2003. A survey of socially interactive robots. Robotics and Autonomous Systems 42, 3-4 (2003), 143–166.
  24. Y. Gao, F. Yang, M. Frisk, D. Hernandez, C. Peters, and G. Castellano. 2018. Social behavior learning with realistic reward shaping. arXiv preprint arXiv:1810.06979 (2018).
  25. Maia Garau, Mel Slater, Vinoba Vinayagamoorthy, Andrea Brogni, Anthony Steed, and M Angela Sasse. 2003. The impact of avatar realism and eye gaze control on perceived quality of communication in a shared immersive virtual environment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 529–536.
  26. Erving Goffman. 1967. On Face Work. 5–45.
  27. E. T. Hall. 1966. The hidden dimension. Vol. 609. Garden City, NY: Doubleday.
  28. Dai Hasegawa, Justine Cassell, and Kenji Araki. 2010. The role of embodiment and perspective in direction-giving systems. In 2010 AAAI Fall Symposium Series.
  29. Marcel Heerink, Ben Krose, Vanessa Evers, and Bob Wielinga. 2009. Measuring acceptance of an assistive social robot: a suggested toolkit. In RO-MAN 2009 - The 18th IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 528–533.
  30. Yaxin Hu, Yuxiao Qu, Adam Maus, and Bilge Mutlu. 2022. Polite or Direct? Conversation Design of a Smart Display for Older Adults Based on Politeness Theory. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 307, 15 pages.
  31. C. L. Hull. 1943. Principles of behavior. Vol. 422. Appleton-Century-Crofts, New York.
  32. Akira Imayoshi, Nagisa Munekata, and Tetsuo Ono. 2013. Robots that can feel the mood: Context-aware behaviors in accordance with the activity of communications. In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 143–144.
  33. Akira Imayoshi, Hiroshi Yoshikawa, Nagisa Munekata, and Tetsuo Ono. 2013. Robots that Can Feel the Mood: Adaptive Interrupts in Conversation Using the Activity of Communications. iHAI2013, II-2-2 (2013).
  34. M. Inzlicht, A. Shenhav, and C. Y. Olivola. 2018. The effort paradox: Effort is both costly and valued. Trends in Cognitive Sciences 22, 4 (2018), 337–349.
  35. Alessandro Iop, Sahba Zojaji, and Christopher Peters. 2022. Don’t walk between us: adherence to social conventions when joining a small conversational group of agents. In Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents. 1–8.
  36. A. Kendon. 1990. Conducting interaction: Patterns of behavior in focused encounters. Vol. 7. Cambridge University Press.
  37. Gyoung Kim and Frank Biocca. 2018. Immersion in virtual reality can increase exercise motivation and physical performance. In Virtual, Augmented and Mixed Reality: Applications in Health, Cultural Heritage, and Industry: 10th International Conference, VAMR 2018, Held as Part of HCI International 2018, Las Vegas, NV, USA, July 15-20, 2018, Proceedings, Part II 10. Springer, 94–102.
  38. Dimosthenis Kontogiorgos, Andre Pereira, Olle Andersson, Marco Koivisto, Elena Gonzalez Rabal, Ville Vartiainen, and Joakim Gustafson. 2019. The effects of anthropomorphism and non-verbal social behaviour in virtual assistants. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents. 133–140.
  39. Jong-Eun Roselyn Lee and Clifford I. Nass. 2010. Trust in computers: The computers-are-social-actors (CASA) paradigm and trustworthiness perception in human-computer communication. In Trust and Technology in a Ubiquitous Modern Environment: Theoretical and Methodological Perspectives. IGI Global, 1–15.
  40. Jamy Li. 2015. The benefit of being physically present: A survey of experimental works comparing copresent robots, telepresent robots and virtual agents. International Journal of Human-Computer Studies 77 (2015), 23–37.
  41. Rui Li, Marc van Almkerk, Sanne van Waveren, Elizabeth Carter, and Iolanda Leite. 2019. Comparing Human-Robot Proxemics Between Virtual Reality and the Real World. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). 431–439.
  42. Joseph Edward McGrath. 1984. Groups: Interaction and performance. Vol. 14. Prentice-Hall, Englewood Cliffs, NJ.
  43. Brian P. Meier, Simone Schnall, Norbert Schwarz, and John A. Bargh. 2012. Embodiment in Social Psychology. Topics in Cognitive Science 4, 4 (2012), 705–716.
  44. Kshitij Mishra, Mauajama Firdaus, and Asif Ekbal. 2022. Please be polite: Towards building a politeness adaptive dialogue system for goal-oriented conversations. Neurocomputing 494 (2022), 242–254.
  45. Kshitij Mishra, Mauajama Firdaus, and Asif Ekbal. 2023. Predicting Politeness Variations in Goal-Oriented Conversations. IEEE Transactions on Computational Social Systems 10, 3 (2023), 1095–1104.
  46. Jonathan Mumm and Bilge Mutlu. 2011. Human-Robot Proxemics: Physical and Psychological Distancing in Human-Robot Interaction. In Proceedings of the 6th International Conference on Human-Robot Interaction (Lausanne, Switzerland) (HRI ’11). Association for Computing Machinery, New York, NY, USA, 331–338.
  47. Clifford Nass, Youngme Moon, and Paul Carney. 1999. Are people polite to computers? Responses to computer-based interviewing systems. Journal of Applied Social Psychology 29, 5 (1999), 1093–1109.
  48. Clifford Nass, Jonathan Steuer, and Ellen R Tauber. 1994. Computers are social actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 72–78.
  49. Margot M. E. Neggers, Raymond H. Cuijpers, Peter A. M. Ruijten, and Wijnand A. IJsselsteijn. 2022. Determining Shape and Size of Personal Space of a Human when Passed by a Robot. International Journal of Social Robotics 14, 2 (Mar 2022), 561–572.
  50. Radoslaw Niewiadomski, Elisabetta Bevacqua, Maurizio Mancini, and Catherine Pelachaud. 2009. Greta: an interactive expressive ECA system. In AAMAS ’09: Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems, Vol. 2. International Foundation for Autonomous Agents and Multiagent Systems, 1399–1400. https://dl.acm.org/doi/abs/10.5555/1558109.1558314
  51. David Novick and Aaron E. Rodriguez. 2021. A Comparative Study of Conversational Proxemics for Virtual Agents. Lecture Notes in Computer Science, Vol. 12770. Springer International Publishing, Cham, 96–105.
  52. S. K. Pathi. 2018. Join the Group Formations using Social Cues in Social Robots. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 1766–1767.
  53. C. Pedica and H. H. Vilhjálmsson. 2018. Study of Nine People in a Hallway: Some Simulation Challenges. In the 18th Int. Conf. on Intelligent Virtual Agents. ACM, 185–190.
  54. André Pereira, Catharine Oertel, Leonor Fermoselle, Joseph Mendelson, and Joakim Gustafson. 2020. Effects of different interaction contexts when evaluating gaze models in HRI. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. 131–139.
  55. Christopher Peters, Chengjie Li, Fangkai Yang, Vanya Avramova, and Gabriel Skantze. 2018. Investigating Social Distances between Humans, Virtual Humans and Virtual Robots in Mixed Reality. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (Stockholm, Sweden) (AAMAS ’18). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 2247–2249.
  56. Rolf Pfeifer and Christian Scheier. 2001. Understanding intelligence. MIT Press.
  57. Priyanshu Priya, Mauajama Firdaus, and Asif Ekbal. 2023. A multi-task learning framework for politeness and emotion detection in dialogues for mental health counselling and legal aid. Expert Systems with Applications 224 (2023), 120025.
  58. Simon Provoost, Ho Ming Lau, Jeroen Ruwaard, and Heleen Riper. 2017. Embodied conversational agents in clinical psychology: a scoping review. Journal of Medical Internet Research 19, 5 (2017), e151.
  59. Byron Reeves and Clifford Nass. 1996. The media equation: How people treat computers, television, and new media like real people. Cambridge, UK 10 (1996).
  60. Laurel D Riek. 2012. Wizard of Oz studies in HRI: a systematic review and new reporting guidelines. Journal of Human-Robot Interaction 1, 1 (2012), 119–136.
  61. Jorge Rios-Martinez, Anne Spalanzani, and Christian Laugier. 2015. From proxemics theory to socially-aware navigation: A survey. International Journal of Social Robotics 7 (2015), 137–153.
  62. Eileen Roesler, Dietrich Manzey, and Linda Onnasch. 2023. Embodiment matters in social HRI research: Effectiveness of anthropomorphism on subjective and objective outcomes. ACM Transactions on Human-Robot Interaction 12, 1 (2023), 1–9.
  63. Maha Salem, Micheline Ziadee, and Majd Sakr. 2013. Effects of politeness and interaction context on perception and experience of HRI. In Social Robotics: 5th International Conference, ICSR 2013, Bristol, UK, October 27-29, 2013, Proceedings 5. Springer, 531–541.
  64. SM Bhagya P Samarakoon, MA Viraj J Muthugala, and AG Buddhika P Jayasekara. 2022. A Review on Human–Robot Proxemics. Electronics 11, 16 (2022), 2490.
  65. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 35–40.
  66. F. Setti, O. Lanz, R. Ferrario, V. Murino, and M. Cristani. 2013. Multi-scale F-formation discovery for group detection. In 2013 IEEE International Conference on Image Processing. IEEE, 3547–3551.
  67. Francesco Setti, Chris Russell, Chiara Bassetti, and Marco Cristani. 2015. F-formation detection: Individuating free-standing conversational groups in images. PLoS ONE 10, 5 (2015).
  68. Ameneh Shamekhi, Q Vera Liao, Dakuo Wang, Rachel KE Bellamy, and Thomas Erickson. 2018. Face Value? Exploring the effects of embodiment for a group facilitation agent. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–13.
  69. Anthony Steed and Ralph Schroeder. 2015. Collaboration in immersive and non-immersive virtual environments. Immersed in Media: Telepresence Theory, Measurement & Technology (2015), 263–282.
  70. Kazunori Terada, Mitsuki Okazoe, and Jonathan Gratch. 2021. Effect of politeness strategies in dialogue on negotiation outcomes. In Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents. 195–202.
  71. Sam Thellman, Annika Silvervarg, Agneta Gulz, and Tom Ziemke. 2016. Physical vs. virtual agent embodiment and effects on social interaction. In Intelligent Virtual Agents: 16th International Conference, IVA 2016, Los Angeles, CA, USA, September 20–23, 2016, Proceedings 16. Springer, 412–415.
  72. Linda Tickle-Degnen and Robert Rosenthal. 1990. The nature of rapport and its nonverbal correlates. Psychological Inquiry 1, 4 (1990), 285–293.
  73. X. T. Truong and T. D. Ngo. 2018. “To approach humans?”: A unified framework for approaching pose prediction and socially aware robot navigation. IEEE Transactions on Cognitive and Developmental Systems 10, 3 (2018), 557–572.
  74. M. Vázquez, A. Steinfeld, and S. E. Hudson. 2015. Parallel detection of conversational groups of free-standing people and tracking of their lower-body orientation. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 3010–3017.
  75. Anna-Lisa Vollmer, Katrin Solveig Lohan, Kerstin Fischer, Yukie Nagai, Karola Pitsch, Jannik Fritsch, Katharina J. Rohlfing, and Britta Wrede. 2009. People modify their tutoring behavior in robot-directed interaction for action learning. In 2009 IEEE 8th International Conference on Development and Learning. IEEE, Shanghai, China, 1–6.
  76. Anna-Lisa Vollmer, Robin Read, Dries Trippas, and Tony Belpaeme. 2018. Children conform, adults resist: A robot group induced peer pressure on normative social conformity. Science Robotics 3, 21 (2018).
  77. Joshua Wainer, David J Feil-Seifer, Dylan A Shell, and Maja J Mataric. 2006. The role of physical embodiment in human-robot interaction. In ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 117–122.
  78. Julie Williamson, Jie Li, Vinoba Vinayagamoorthy, David A. Shamma, and Pablo Cesar. 2021. Proxemics and Social Interactions in an Instrumented Virtual Reality Workshop. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 253, 13 pages.
  79. Julie R. Williamson, Joseph O’Hagan, John Alexis Guerra-Gomez, John H Williamson, Pablo Cesar, and David A. Shamma. 2022. Digital Proxemics: Designing Social and Collaborative Interaction in Virtual Environments. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 423, 12 pages.
  80. Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins. 2011. The Aligned Rank Transform for Nonparametric Factorial Analyses Using Only Anova Procedures. Association for Computing Machinery, New York, NY, USA, 143–146.
  81. Bian Wu, Xiaoxue Yu, and Xiaoqing Gu. 2020. Effectiveness of immersive virtual reality using head-mounted displays on learning performance: A meta-analysis. British Journal of Educational Technology 51, 6 (2020), 1991–2005.
  82. Agnieszka Wykowska. 2021. Robots as mirrors of the human mind. Current Directions in Psychological Science 30, 1 (2021), 34–40.
  83. Li Xiao and Vikas Kumar. 2021. Robotics for customer service: a useful complement or an ultimate substitute? Journal of Service Research 24, 1 (2021), 9–29.
  84. Fangkai Yang, Yuan Gao, Ruiyang Ma, Sahba Zojaji, Ginevra Castellano, and Christopher Peters. 2021. A dataset of human and robot approach behaviors into small free-standing conversational groups. PLoS ONE 16, 2 (2021), e0247364.
  85. F. Yang and C. Peters. 2019. App-LSTM: Data-driven Generation of Socially Acceptable Trajectories for Approaching Small Groups of Agents. In Proceedings of the 7th International Conference on Human-Agent Interaction. 144–152.
  86. F. Yang and C. Peters. 2019. AppGAN: Generative Adversarial Networks for Generating Robot Approach Behaviors into Small Groups of People. In 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, 1–8.
  87. F. Yang, W. Yin, T. Inamura, M. Björkman, and C. Peters. 2020. Group Behavior Recognition Using Attention- and Graph-Based Neural Networks. In Proceedings of the 24th European Conference on Artificial Intelligence - ECAI 2020.
  88. Sahba Zojaji, Adrian Benigno Latupeirissa, Iolanda Leite, Roberto Bresin, and Christopher Peters. 2023. Persuasive polite robots in free-standing conversational groups. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023). 1–8.
  89. Sahba Zojaji, Andrii Matviienko, and Christopher Peters. 2024. Exploring the Influence of Co-Present and Remote Robots on Persuasiveness and Perception of Politeness. In Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (Boulder, CO, USA) (HRI ’24 Companion). Association for Computing Machinery, New York, NY, USA, 5 pages.
  90. Sahba Zojaji, Christopher Peters, and Catherine Pelachaud. 2020. Influence of virtual agent politeness behaviors on how users join small conversational groups. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents. ACM, 1–8. https://doi.org/10.1145/3383652.3423917
  91. Sahba Zojaji, Anthony Steed, and Christopher Peters. 2023. Impact of Immersiveness on Persuasiveness, Politeness, and Social Adherence in Human-Agent Interactions within Small Groups. In ICAT-EGVE 2023 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Jean-Marie Normand, Maki Sugimoto, and Veronica Sundstedt (Eds.). The Eurographics Association. https://doi.org/10.2312/egve.20231315
  92. Sahba Zojaji, Adam Červeň, and Christopher Peters. 2023. Impact of Multimodal Communication on Persuasiveness and Perceived Politeness of Virtual Agents in Small Groups. In Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents. 1–8.
