
Impact of Anthropomorphic Robot Design on Trust and Attention in Industrial Human-Robot Interaction

Published: 18 October 2021


Abstract

The application of anthropomorphic features to robots is generally considered beneficial for human-robot interaction (HRI). Although previous research has mainly focused on social robots, the phenomenon is gaining increasing attention in industrial human-robot interaction as well. In this study, the impact of the anthropomorphic design of a collaborative industrial robot on the dynamics of trust and visual attention allocation was examined. Participants interacted with a robot that was either anthropomorphically or non-anthropomorphically designed. Unexpectedly, attribute-based trust measures revealed no beneficial effect of anthropomorphism but rather a negative impact on the perceived reliability of the robot. Trust behavior was not significantly affected by the anthropomorphic robot design during faultless interactions, but showed a relatively steeper decrease after participants experienced a failure of the robot. With regard to attention allocation, the study clearly reveals a distracting effect of anthropomorphic robot design. The results emphasize that anthropomorphism might not be an appropriate design feature in industrial HRI, as it not only failed to show positive effects on trust but also distracted participants from task-relevant areas, which might be a significant drawback with regard to occupational safety in HRI.


1 INTRODUCTION

With the progressing advancement of technology over the past years, the workplace has changed substantially. Today we already see how rapidly robots are moving into manufacturing and human environments, with a variety of applications ranging from elderly care to service in office environments and collaborative work on industrial production lines [11]. The fluency and success of these new forms of human-machine interaction crucially depend on human-robot trust [29]. Freedy et al. [26], for instance, showed that levels of trust are correlated with the amount of manual interventions overriding a robot's actions. Whereas overtrusting the robot led to an omission of actions with negative performance outcomes, distrusting the robot resulted in unnecessary actions. Other studies revealed that trust in an automated system, not its objective reliability, was the driving force behind reliance on the system's advice [18]. When participants distrusted the system, its advice was not followed even though it was correct. Further, Martelaro et al. [45] found a mediating effect of trust on the relationship between a robot's depiction as being vulnerable and feelings of companionship in interaction with a social robot.

Whereas shaping factors of trust in human-robot interaction (HRI) have already been identified, the effective impact of these factors is often not well understood. One such example is anthropomorphism in HRI. It is suggested that a human-like robot design promotes trust in the robot. But this positive effect seems to be limited to a certain level of anthropomorphism (e.g. it does not hold for the uncanny valley [59, 66, 67]) and to certain application domains (e.g. social robotics [27]). Therefore, the current study aims at broadening the scope of research on the effective impact of anthropomorphism on trust by addressing an application domain that has not gained much attention yet: the industrial context.

Moreover, the study addresses the effect of anthropomorphism on attention allocation in industrial HRI. The majority of studies so far have investigated how anthropomorphic robot design affects subjective measures like perceived intelligence, likability, or acceptance of a robot [17, 30, 36, 72]. Variables that are more closely linked to coordination behavior and performance have rarely been examined. As attention is crucial for a safe and efficient interaction, this variable is explicitly addressed in the current study.

1.1 Trust in HRI

Several definitions have been proposed for trust in human-machine interaction [29, 38, 48]. One widely accepted and used definition was established by Lee and See [39] with regard to human-automation interaction. According to the two authors, trust is “the attitude that an agent will help achieve an individual's goal in a situation characterized by uncertainty and vulnerability” ([39], p. 51). This definition might also be applied to HRI. Trust is described as an attitude in a task-related context. The two contextual components of trust are the vulnerability of the trustor and the uncertainty of the situation.

In work-related HRI, the human relies on the robot to fulfill a task that is important to the human co-worker, e.g. holding a heavy part of a car body to ease a manual welding process. In addition to the vulnerability of the trustor in terms of task fulfilment, the example illustrates a second component of vulnerability in HRI: the physical proximity and the vulnerability of the human co-worker in terms of physical safety. Therefore, humans not only have to rely on the robot to adequately support them in the work process but also not to hurt them, especially when the workspaces of humans and robots overlap. The definition's second contextual component is the uncertainty of the situation. In conventional industrial HRI, uncertainty used to be relatively low as robots were pre-programmed and always performed actions in exactly the same repetitive manner. However, with the introduction of so-called “cobots”, collaborative robots that are easily programmable and therefore flexibly deployable, and the emergence of more and more semi-autonomous mobile robots, the uncertainty in interaction with a robot in the industrial domain has increased.

The proposed trust definition also applies to social robots. In social HRI settings, the uncertainty is even greater. In these scenarios, the robot interacts in typically unstructured environments and has to adapt to changing inputs. The task of a social robot is the interaction and communication itself; consequently, the human depends on the robot as the interactional counterpart [5, 24]. This represents the vulnerability component of trust.

In this respect, the proposed trust definition by Lee and See [39] is applicable to HRI. As trust is described as an attitude, it primarily focuses on cognitive processes like expectations and attributions that are central components of trust. In this aspect, however, the definition might be too narrow, as robots differ from automation due to their embodiment as physical interaction partners. Salem et al. [60] suggest that this creates new risk dimensions and that trust in robots may thus vary from trust in automated systems. Because of the embodiment, a robot can be touched and can even physically react to human behavior. Therefore, interactions with a robot might resemble important aspects of human-human interaction, in which affective processes represent a crucial dimension of trust, too [40]. In line with this, Tapus and colleagues [64], for example, found that empathetic robot expressions and speech are perceived as more trustworthy, and Paradeda et al. [52] showed that the highest levels of trust are gained when a robot starts with small talk and matching facial expressions, as is the case in typical human-human interactions.

When addressing trust in interaction with a technical agent, trust is often measured several times to cope with the dynamic nature of this attitude [33]. Lewis et al. [41] differentiate the dynamics of trust into three phases. Trust formation describes the basic attitude at the beginning of an interaction with the trustee. In this situation, trust often has to be built upon the trustee's appearance, context information, and prior experience with similar agents, as no interaction experience has been established yet. Once interaction starts, trust is adapted to the actual experiences. If these experiences violate the trustor's expectations, trust dissolution follows. A restoration phase describes the trust development if positive interactions follow such a negative event [44].

Interestingly, trust restoration and adaptation do not always seem to be exclusively related to interaction experiences with the robot. Robot failures sometimes do not affect people's decisions of whether to follow the advice of a robot or not, but do affect subjective perceptions of the robot's trustworthiness [57, 60]. To accommodate this divergence, it is important to not only account for the dynamics of trust but also for the differences in trust attitude and trust behavior.

In summary, existing trust definitions highlight two components that determine whether trust becomes a relevant construct in interaction with another agent: the trustor has to be vulnerable to the actions of the trustee, and there has to be situational uncertainty. Both components apply to work-related HRI. Seeking a better understanding of the impact of trust in HRI, we further need to address this construct in its dynamic complexity. This means addressing trust not only as an attitude but also considering the behavioral component, as well as trust development over time.

Regarding factors that influence trust, a comprehensive meta-analysis by Hancock et al. [29] revealed three categories of impact factors: characteristics of the human, the robot, and the environment. The meta-analysis showed that the strongest effects on trust development are based on the robot's characteristics. These include performance-based factors like the robot's reliability and failure rate. Attribute-based factors like the robot's level of anthropomorphism are especially important for initial trust, which sets the basis for trust formation.

1.2 Anthropomorphism in HRI

Anthropomorphism is broadly defined as the human tendency to transfer human-like characteristics to non-human entities [17]. Anthropomorphic characteristics can be implemented in robot design along different dimensions [51]. A human-like robot appearance is the most apparent anthropomorphic design feature. It is particularly effective in initial interactions, as physical appearance establishes expectations and biases interaction [24]. Other ways to design a robot anthropomorphically concern communication style (e.g. natural speech or gestures [56]), the robot's movement (e.g. using human-like trajectories for an industrial single-arm robot [37]), or context (e.g. naming a robot and describing its personality or hobbies [50]).

Many studies reveal a positive impact of anthropomorphism on the interaction between humans and robots: anthropomorphism improves initial trust perceptions and increases the acceptance of robots as team partners [17]. Moreover, it facilitates human-machine interaction, increases familiarity, and supports users’ perception of predictable and comprehensible robot behaviors [72]. Human-like robots are perceived as more intelligent [30] and sociable [36] and have been shown to be liked more [12]. More recent studies have demonstrated that humanoid robots are judged more according to human norms than less anthropomorphic ones [42, 43]. Other studies have revealed that people empathize more with an anthropomorphic robot than with a non-anthropomorphic one [15, 55].

As presented, a large body of research confirms positive effects of an anthropomorphic design, especially in social settings. However, research is needed to understand the effect of a robot's anthropomorphic features in work-related domains. Results of the few studies regarding this application context are mixed. Some studies report positive effects of anthropomorphism. For instance, Kuz and colleagues showed in a set of studies positive effects of anthropomorphic trajectories of an industrial single-arm robot compared to functional trajectories [37, 46]. Human-like movements benefited the anticipation of target positions from the robot's trajectory, whereas this was not possible when movements were modelled in a typically linear style. Further positive effects of anthropomorphism were reported in a hospital case study. Hospital staff were friendlier towards a mobile delivery robot and even tolerated its malfunctions more readily when the robot had been given a human name than when it had not [14]. This illustrates that even very low levels of anthropomorphism (like calling a robot by a human name) might have the power to cause a more forgiving attitude towards robotic malfunction. In line with that, Ahmad et al. [3] found that an anthropomorphic robot design positively impacted participants’ trust in an error-prone robot. With low error rates, however, the impact of anthropomorphism reversed: in this case, anthropomorphism had a negative effect on trust. Last but not least, Bernotat and Eyssel [7] could not confirm any impact of anthropomorphism in work-related HRI. Their study investigated judgments of an anthropomorphically designed versus a standard industrial (non-anthropomorphic) robot in smart homes. They did not find an effect of robot type on trust.

Furthermore, initial studies have revealed a strong impact of anthropomorphism in HRI on attentional processes. For instance, Bae and Kim [4] demonstrated that anthropomorphic robot features like a face attract more visual attention compared to non-anthropomorphic designs. If the anthropomorphic design of a robot is functional, i.e. has a task-related purpose, the introduction of human-like features could ease the interaction. In line with this idea, Wiese et al. [34] showed that people engage in joint attention with robots, following their gaze to target locations. Moreover, Moon and colleagues [47] provided empirical evidence that using human-like gaze cues during human-robot handovers improves the timing and perceived quality of the handover event. However, if the anthropomorphic design is not instrumental, these features could be detrimental because of their distracting potential. In this sense, non-instrumental anthropomorphic robot features could encourage people to engage in joint attention which, in this case, would be meaningless and could consequently unsettle and distract the human counterpart. Because anthropomorphic designs without any task relation are increasingly implemented in an industrial HRI context (e.g. Sawyer and Baxter from HAHN Group/Rethink Robotics, workerbot from pi4_robotics), possible negative consequences of such robot designs need to be addressed in research. The assumed negative effects might be especially relevant for an industrial setting where human operators work in multi-task environments. In these settings, the human's attention should be focused on task-related areas. Non-instrumental anthropomorphism could therefore distract human operators from their primary task fulfillment.

In sum, several studies have shown a positive effect of an anthropomorphic robot design on trust in HRI. However, as most of these studies are based in social settings, it remains unclear whether positive effects are to be expected in an industrial setting, too. Only a few studies have focused on work-related HRI so far. Whereas anthropomorphic robot movement seems to support users’ perception of predictable robot behavior [37, 46], a positive impact of anthropomorphism on trust in industrial HRI could not be clearly confirmed as results are mixed [3, 7]. However, it seems that an anthropomorphic design could lead to a more forgiving attitude when a robot is error-prone [14]. Last but not least, results from studies on attention allocation indicate possible distracting effects as an inadvertent consequence of non-instrumental anthropomorphic robot design [4].

1.3 Hypotheses

Based on the state of research regarding trust and anthropomorphism in HRI, we hypothesized that interacting with an anthropomorphic robot should lead to higher trust levels compared to a robot without any anthropomorphic features. Moreover, we assumed that an anthropomorphic robot design fosters a more forgiving attitude towards the robot, resulting in a less pronounced trust decline after the experience of a robot failure.

Furthermore, we expected that an anthropomorphic design should lead to a shift in visual attention towards the anthropomorphic features (abstract face), irrespective of their task relevance. However, as the anthropomorphic features had no task relevance, this effect was expected to decrease with increasing interaction experience.


2 MATERIALS AND METHODS

To ensure transparency and in compliance with good research practice, the study was submitted to and approved by the ethics committee of the Humboldt-Universität zu Berlin. Prior to conducting the experiment, we preregistered the study at the Open Science Framework where the raw data of the experiment is available [31].

2.1 Participants

The number of participants was defined based on an a priori power analysis (G*Power 3.1; for details see [21]). Accordingly, a total of 40 students (25 female, 15 male) were recruited via the official participant database PESA of the Institute of Psychology, Humboldt-Universität zu Berlin, and by distributing posters on public notice boards at universities in Berlin. Additionally, the experiment was advertised in student groups on Facebook. The majority of participants were psychology students (n = 35); the remaining five participants studied human medicine, human factors, mathematics, politics and communication, and technology management, respectively. Participants ranged in age from 18 to 35 years (M = 24.47, SD = 4.34). All of them spoke German as a native language or at an equivalent level. People who wore glasses and/or had an impairment of color vision could not participate because of eye-tracking restrictions. Wearing soft or hard contact lenses was, however, permitted. None of the participants had previously interacted with the robot used in this study, but four reported prior experience with robots, either gained in an experiment with a non-industrial (humanoid) robot at the engineering psychology lab at the Humboldt-Universität zu Berlin (n = 3) or in a work-related industrial setting (n = 1).
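For illustration only, an a priori power analysis of this kind could be sketched in Python with statsmodels; the effect size, alpha, and power values below are assumed placeholders and not the parameters of the original G*Power analysis reported in [21].

```python
# Illustrative a priori power analysis (assumed parameters, not those from [21]).
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
n_total = analysis.solve_power(
    effect_size=0.25,  # assumed medium effect size (Cohen's f)
    alpha=0.05,        # assumed significance level
    power=0.80,        # assumed target power
    k_groups=2,        # anthropomorphic vs. non-anthropomorphic group
)
print(round(n_total))  # suggested total sample size under these assumptions
```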

Participants signed consent forms at the beginning of the experiment and received course credit as compensation at the end of the experiment.

2.2 Apparatus and Task

The laboratory was arranged as an assembly workspace with a steel rack containing storage boxes (Figure 1, Figure 2). The industrial robot was positioned in front of the rack, facing the human workstation, which was set up on a high table. The industrial robot used in this experiment was a Sawyer robot from Rethink Robotics, equipped with one arm with 7 degrees of freedom and a range of 1.26 meters (Figure 1). Participants’ visual attention allocation was assessed with a monocular mobile eye tracker from Pupil Labs [35]. The Pupil Labs glasses were equipped with two cameras: one world camera to record the participant's view (1,920 × 1,080 pixels, 100° fisheye field of view, 60 Hz sampling frequency on a subset of 1,280 × 720 pixels) and one eye camera for the right eye (1,920 × 1,080 pixels, 120 Hz sampling frequency on a subset of 320 × 280 pixels). Eye-camera adjustment, calibration, recording of the eye movements, and the definition of four areas of interest (AOIs) were carried out with the help of the open-source software Pupil Capture (release 1.10, January 2019 [35]). For the subsequent video analysis, we used the open-source software Pupil Player (release 1.10, January 2019 [35]).

Fig. 1.

Fig. 1. Left part: Experimental setting with 1. handover area, 2. human workspace, and 3. assembly area; Right part: Sawyer Cobot from Rethink Robotics with both appearance conditions.

Fig. 2.

Fig. 2. Experimental setup with four specified AOIs for measuring visual attention allocation: (A) shelf with boxes, (B) assembly area, (C) handover area, and (D) head-mounted display of the robot.

For the main task of the human-robot collaboration, the robot grasped boxes out of the steel rack and handed them over to the participant. The boxes were first ostensibly scanned by the robot (indicated by a flashing light) to simulate an initial check of the quality (shape, color, size) and quantity of all components inside the box. Afterwards, the robot handed the box to the human coworker. The required movement sequences were programmed using the software INTERA and included varying movements in the following sequence. First, the robot moved its gripper over a box in the steel rack and activated a light at the gripper to simulate a visual quality check. Next, the robot arm moved to a position that enabled it to grab the box. Equipped with the box, the arm moved towards the handover area and waited two seconds in this position before opening the gripper. Subsequently, it moved back to the initial position and waited for 15 seconds before starting the next loop. Except for the task-relevant movements, no other interaction with the robot was possible. The robot's functions and movement patterns were the same in all conditions. However, every single handover was performed differently by the robot, depending on the initial position of the box in the rack. Therefore, the movements were not predictable by the participant and represented the uncertainty aspect of the situation. This was a prerequisite for trust being a relevant construct in the interaction of human and robot.
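As an illustration only, the programmed handover sequence can be summarized in the following Python-style pseudocode; all robot.* calls are hypothetical placeholders and do not correspond to the INTERA software actually used.

```python
import time

def handover_block(boxes, robot):
    """Sketch of one experimental block; every robot.* method is a
    hypothetical placeholder, not part of the INTERA API."""
    for box in boxes:
        robot.move_gripper_over(box)      # position the gripper above the box in the rack
        robot.flash_light()               # light at the gripper simulates a quality check
        robot.grasp_box(box)              # grab the box
        robot.move_to_handover_area()     # carry the box to the handover area
        time.sleep(2)                     # wait two seconds before releasing
        robot.open_gripper()              # hand the box over to the participant
        robot.move_to_initial_position()  # return to the starting position
        time.sleep(15)                    # pause before the next handover
```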

The boxes delivered by the robot initiated the task of the participant. First, participants had to take the box from the robot. Second, they had to assemble the components inside the box in a prescribed style. Components were LEGO bricks which simulated parts of a circuit board. Third, participants controlled the quality of the work by comparing the correct location and color of the components to a target circuit board. Subsequently, participants put the final product back into the box and transferred it to the experimenter.

Without the robot handing over the box, participants could not fulfil their task; they were depending on the robot to support them. This represented the vulnerability component of trust in our experimental setup.

Every interaction sequence was recorded with a bird's-eye-view camera (Logitech C922) using the Logitech Capture software. The camera provides progressive 1080p video. It was fixed to the ceiling of the laboratory, 3 meters centrally above the area where the handover between the participant and the robot took place.

2.3 Design

The experimental study used a mixed design with anthropomorphism as between-subject factor and interaction experience as within-subject factor. Anthropomorphism was implemented through the robot's physical appearance. This dimension is the most prominent with regard to first impressions and is known to shape expectations and interaction behavior [20]. In the anthropomorphic condition, the robot's display showed an abstract face consisting of two eyes and corresponding eyebrows. This operationalization of anthropomorphic appearance was chosen because the face is a central informational component in human-human interaction and is supposed to give cues about the mental state of the interaction partner [28, 70]. Therefore, introducing a robotic face should already be sufficient to change the perceived level of anthropomorphism. The face was dynamic as it changed gaze direction and blinked from time to time. However, the dynamics were not meaningful as they were not related to the robot's actions. In the non-anthropomorphic condition, the robot's display just showed the Rethink Robotics logo (Figure 1).

In order to differentiate between the pure impact of anthropomorphism and the combined impact of anthropomorphism and experience, we included a second independent variable: interaction experience (within-subject). All participants worked on the collaborative task for a total of four blocks. Every block consisted of six boxes that were handed over by the robot. Whereas the first three blocks represented an increasing positive interaction experience, the last block incorporated a single negative interaction experience. After three successful handovers, the robot dropped a box before handing it over to the participant. Due to the robot's failure, the participant could not start to assemble the circuit board but had to wait for the robot to hand over the next box. After the failure experience, another three successful handovers followed.

2.4 Dependent Measures

2.4.1 Control Variables and Manipulation Check.

To prevent confounding effects of participants’ attitudes towards technology and robots in particular, we used the Negative Attitudes towards Robots Scale (NARS, Cronbach's α = .80 [63]), the subscale “Propensity to trust technology” of the Complacency Potential questionnaire (Cronbach's α = .63 [22]), and the Affinity for Technology Interaction scale (ATI [25]; Cronbach's α ranged between .83 and .92 in five different validation samples). Answers for the first two questionnaires were provided on a five-point Likert scale, for the ATI scale on a six-point Likert scale. To analyze the data, we computed the scale means. Moreover, we assessed participants’ propensity to anthropomorphize with a German version of the Individual Differences in Anthropomorphism Questionnaire by Waytz et al. (IDAQ; Cronbach's α between .82 and .90 [68]). The German version by Eyssel and Pfundmair [19] deviates from the original version as it uses a seven-point Likert scale. The scores for the single items were summed; therefore, possible results ranged between 15 and 105.
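A minimal scoring sketch for these questionnaires (column names are hypothetical; the original analyses were not necessarily run in Python):

```python
import pandas as pd

# One row per participant; item column names are hypothetical.
df = pd.read_csv("questionnaires.csv")

# Scale means for NARS, propensity to trust technology, and ATI
df["nars_mean"] = df.filter(like="nars_").mean(axis=1)
df["prop_trust_mean"] = df.filter(like="proptrust_").mean(axis=1)
df["ati_mean"] = df.filter(like="ati_").mean(axis=1)

# IDAQ: sum of the 15 items rated on a 7-point scale, giving a range of 15 to 105
df["idaq_sum"] = df.filter(like="idaq_").sum(axis=1)
```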

To check whether our manipulation regarding anthropomorphism was successful, we asked participants to complete the Godspeed questionnaire consisting of five subscales [6]: Perceived Anthropomorphism (Cronbach's α = .87), Animacy (Cronbach's α = .70), Likeability (Cronbach's α = .85), Perceived Intelligence (Cronbach's α = .76), and Perceived Safety of the robot (Cronbach's α = .91). Additionally, we included the subscale Perceived Humanness of the revised Godspeed questionnaire by Ho and MacDorman (Cronbach's α = .84 [32]). We did not include the other two subscales, Eeriness and Attractiveness, as these are not necessarily related to anthropomorphism. All scales were presented as semantic differentials with ratings from one to five.

2.4.2 Trust.

As trust was one of our key variables, we chose a multifaceted assessment approach. Trust as an attitude was measured using subjective variables; trust behavior was assessed with objective measures. The subjective trust assessment included a questionnaire specifically developed to measure key factors impacting trust in an industrial human-robot collaboration setting [13]. It differentiates three subscales addressing the robot's motion and pick-up speed (Cronbach's α = .61), safe co-operation (Cronbach's α = .80), and robot and gripper reliability (Cronbach's α = .71), with a total of ten items answered on a five-point Likert scale. The overall score is the sum score of all items and can thus range from a minimum of ten to a maximum of 50. With regard to the trust definition [39], this questionnaire addresses robot attributes that are relevant for the cognitive component of trust development (trust propositions). However, the questionnaire does not assess the actual trust level while interacting with a robot.

Therefore, we asked participants to additionally rate two single items. Participants had to rate the robot's perceived overall reliability from 0% to 100%, as the perceived reliability of a robot is known to be a major source of trust formation [29]. Although reliability is already a subscale of the trust questionnaire, that subscale strongly focuses on the gripper reliability, not the robot's overall reliability. With the additional single question, we wanted to ensure a rating of the perceived overall robot reliability. Moreover, we asked participants to rate their overall trust from 0% to 100% on a single item to assess the actual trust level.

The objective trust variable (trust behavior) was based on the bird's-eye video recording and measured the time participants had both hands in the handover area before the robot gave the box to the participant. The handover area was fixed and marked on the table with red tape (Figure 1, area 1). Time measurement started when both hands of a participant were in the handover area and stopped when the actual handover of the box from the robot to the human began. The video recordings were manually screened using software specifically programmed for this purpose, and time was measured using the corresponding time stamps of the video. Differences between the time stamps were automatically computed and represented the duration for which both hands were in the handover area. Shorter times were interpreted as higher levels of trust, as they indicate that participants expected the robot to release the box as anticipated. Longer times were accordingly associated with lower trust. This was the case when participants had their hands already under or at the box while the robot was still moving, acting as a backup in case the robot dropped the box too early. The mean handover time was calculated per block for the positive interactions (block 1 to block 3). For the comparison of pre- and post-failure trust, we calculated the mean handover time based on the three interactions directly before and after the failure, respectively (Figure 3).
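The computation of the behavioral trust measure from the annotated time stamps can be illustrated as follows (data format and names are hypothetical):

```python
from statistics import mean

def handover_durations(annotations):
    """annotations: list of (hands_in_area_ts, handover_start_ts) pairs in seconds,
    taken from the manually screened video (hypothetical data format)."""
    return [start - hands_in for hands_in, start in annotations]

def mean_handover_time_per_block(annotations_per_block):
    """Mean handover time per block; longer times are interpreted as lower trust."""
    return {block: mean(handover_durations(a))
            for block, a in annotations_per_block.items()}
```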

Fig. 3.

Fig. 3. Experimental procedure depicting measurement points for the control variables, the manipulation check, and the subjective trust measures prior to and between experimental blocks. Blocks 1 to 4 are shown with six correct handovers each (grey squares) and the respective measures. The fourth block contains an additional faulty handover (red square).

2.4.3 Visual Attention Allocation.

Visual attention allocation was measured by means of eye-tracking data. First, we assessed the number of fixations for different predefined areas of interest (AOIs). This measure reflects the importance of the predefined areas for the subject, as more important areas are fixated more frequently [53]. For this purpose, four different AOIs (specified by markers) were defined before the experiment started. These AOIs corresponded to parts of the experimental setup and task that had to be attended to for a safe and efficient HRI. These were (a) the shelf including the boxes, (b) the assembly area, (c) the handover area, and (d) the robot's head-mounted display showing either a face (anthropomorphic condition) or the Rethink Robotics logo (non-anthropomorphic condition; Figure 2).

Second, we assessed the mean fixation time per AOI (in milliseconds, ms). This variable should give additional insight as an increased fixation time indicates either difficulties in understanding a visual input or that the object is more engaging in some way [53].

Fixations were defined by a minimum duration of 100 ms, a maximum duration of 400 ms, and a maximum dispersion of 1.5° within this time window [8, 49, 61]. Both variables were calculated for the first three blocks only, as we did not have hypotheses regarding the impact of a robot failure on visual attention allocation (Figure 3).
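A simplified dispersion-based fixation filter using these thresholds could look like the sketch below (an illustrative I-DT-style implementation; the gaze input format is assumed, and Pupil Labs' own detector differs in detail):

```python
def detect_fixations(gaze, sampling_rate=120, min_dur=0.100, max_dur=0.400, max_disp=1.5):
    """gaze: list of (x, y) positions in degrees of visual angle.
    Returns (start_index, end_index) pairs of detected fixations."""
    def dispersion(window):
        xs, ys = zip(*window)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    min_len = int(min_dur * sampling_rate)   # 100 ms minimum duration
    max_len = int(max_dur * sampling_rate)   # 400 ms maximum duration
    fixations, i = [], 0
    while i + min_len <= len(gaze):
        j = i + min_len
        if dispersion(gaze[i:j]) <= max_disp:
            # extend the window while dispersion and maximum duration allow it
            while j < len(gaze) and j - i < max_len and dispersion(gaze[i:j + 1]) <= max_disp:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations
```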

2.5 Procedure

The experimental procedure is depicted in Figure 3. The study took place at the engineering psychology lab at the Humboldt-Universität zu Berlin. All subjects were randomly assigned to one of the two between-subject conditions. For practical reasons, the assignment changed every five participants, so that the current robot set-up could be run several times before reconfiguring the system (different programs for the anthropomorphic and non-anthropomorphic set-up). Participants received corresponding written instructions for the task and task setting. They were asked to imagine being a factory worker in a company producing electronic devices. The instructions described a collaborative task in which participants were working together with a robot. The robot's task was to hand over boxes from a steel rack to the participants. The boxes contained assembly material for a LEGO circuit board. Participants’ task was to assemble the circuit boards and to perform subsequent quality checks. No further information was given about the robot. After providing informed consent and answering sociodemographic questions, participants completed questionnaires assessing the control variables: the ATI, the propensity to trust technology, the IDAQ, and the NARS. Afterwards, participants put on the mobile eye-tracking device and the first calibration was carried out (recalibrations were conducted between blocks if needed). Subsequently, they were exposed to the robot, whose display showed either a face or the company's logo, according to the respective condition. For the manipulation check, the appearance of the static robot was then assessed via the anthropomorphism questionnaires (Godspeed and the subscale Perceived Humanness from the revised Godspeed). Before the start of the actual task, the trust questionnaires were filled out (initial measure, t0). After completing the first block, subjective trust was measured again (pre-failure measure, after six positive interactions, t1). Blocks 2 and 3 were comparable to the first block, with six successful interactions between robot and human per block. To measure the impact of an increasing positive experience in interaction with the robot, another subjective trust assessment was conducted after the third block (pre-failure measure, after 18 positive interactions, t2). Block 4 contained the robot's failure. After this last block, trust was measured again (post-failure measure, t3). In total, participants experienced 25 handovers with the robot. The entire experimental procedure lasted approximately 1.5 hours.

2.6 Data Analysis

For the control variables and the manipulation check regarding anthropomorphism, results are based on two-sided t-tests for independent samples comparing the two between-subject groups. We expected no differences between groups. Therefore, it was important to control for a type II error, which is indirectly accounted for by choosing a relatively liberal alpha level of α = .2 [9].
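A minimal sketch of such a group comparison (column and group labels are hypothetical):

```python
from scipy import stats

def compare_groups(df, dv, group_col="condition", alpha=0.2):
    """Two-sided independent-samples t-test between the two appearance groups;
    the liberal alpha of .2 guards against type II errors."""
    a = df.loc[df[group_col] == "anthropomorphic", dv]
    b = df.loc[df[group_col] == "non-anthropomorphic", dv]
    t, p = stats.ttest_ind(a, b)
    return t, p, p < alpha  # True flags a group difference at the liberal criterion
```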

We assessed different aspects of trust attitude with three different measures. Therefore, we first conducted correlational tests with the respective variables to analyze the interdependencies.
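These correlational tests can be sketched as follows (variable names are hypothetical; each array holds one value per participant, averaged over the measurement points):

```python
from scipy.stats import pearsonr

def trust_measure_correlations(trust_prop, reliability, trust):
    """Pairwise Pearson correlations (r, p) between the three trust attitude measures."""
    return {
        "trust_prop-reliability": pearsonr(trust_prop, reliability),
        "trust_prop-trust": pearsonr(trust_prop, trust),
        "reliability-trust": pearsonr(reliability, trust),
    }
```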

To test our hypotheses, we used two different analysis designs. To test the impact of anthropomorphism on trust and visual attention allocation during fault-free interactions, we followed a 2 (anthropomorphism) × 3 (interaction experience) design. Data were then analyzed with mixed ANOVAs for the single variables, with anthropomorphism as between-subject and interaction experience as within-subject factor, including the first three experimental blocks (Figure 4, blue rectangle). If assumptions of variance homogeneity or normal distribution were violated, the non-parametric Mann-Whitney U test was applied instead of an ANOVA.
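Such a mixed ANOVA could, for example, be computed with the pingouin package on long-format data (column names are hypothetical; the original analyses were not necessarily run this way):

```python
import pingouin as pg

def mixed_anova_first_blocks(long_df, dv):
    """2 (anthropomorphism, between) x 3 (block, within) mixed ANOVA;
    long_df has one row per participant and block (hypothetical column names)."""
    return pg.mixed_anova(data=long_df, dv=dv, within="block",
                          between="anthropomorphism", subject="participant")
```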

Fig. 4.

Fig. 4. Experimental mixed design with anthropomorphism as between-subject and interaction experience as within-subject variable. The blue and orange rectangles represent the two statistical analysis designs.

To test the impact of anthropomorphism and failure experience on trust, we used a 2 (anthropomorphism) × 2 (interaction experience) design. Interaction experience only included trust data assessed directly before and after experiencing the robot failure. This involved subjective trust measures at t2 and t3, and objective trust measures recorded in block 4 (Figure 4, orange rectangle). Because the trust measures at t2 were already included in the first statistical analysis design, we applied Bonferroni corrections to account for the alpha error accumulation of multiple tests in the second analysis. If assumptions of variance homogeneity or normal distribution were violated, the non-parametric Mann-Whitney U test was applied instead of an ANOVA.

For pairwise comparisons of the independent variable interaction experience, p-values were Bonferroni corrected. In the case of violations of Mauchly's sphericity test, the Greenhouse-Geisser adjustment was used for corrections.


3 RESULTS

3.1 Attitudes Towards Technology

We found no significant differences between the anthropomorphic and the non-anthropomorphic group with regard to the NARS (t(38) = 1.03, p = .309), participants’ propensity to trust technology (U = 198.5, p = .967), their affinity for technology interaction (ATI; t(38) = −0.57, p = .566), or their propensity to anthropomorphize (IDAQ; t(38) = −0.38, p = .699). All participants scored relatively low on the NARS (M = 2.69, SD = 0.60) and had a moderately positive attitude towards technology (Propensity to Trust Technology: MpropTrust = 3.72, SDpropTrust = 0.49; ATI: MaffinityTech = 3.25, SDaffinityTech = 0.99). Regarding participants’ tendency to anthropomorphize, the data showed a relatively large variance, with values ranging from 20 to 77 (M = 45.7, SD = 14.07).

3.2 Manipulation Check

Surprisingly, the overall scores of the Godspeed questionnaire revealed comparable ratings in the anthropomorphic (M = 2.97, SD = 0.46) and non-anthropomorphic condition (M = 2.80, SD = 0.32), t(38) = 1.34, p = .187. Data of the anthropomorphism subscale showed that both robots were perceived as not anthropomorphic (Manthro = 1.66, SDanthro = 0.44; Mnon-anthro = 1.67, SDnon-anthro = 0.49; t(38) = −0.06, p = .947). This was supported by ratings on the human-likeness scale of the revised Godspeed questionnaire [32]. Again, ratings were rather low and did not differ between the two anthropomorphism conditions (Manthro = 2.12, SDanthro = 0.70; Mnon-anthro = 1.76, SDnon-anthro = 0.63; t(38) = 1.69, p = .099). Independent of condition, the robot was liked (M = 3.43, SD = 0.71; U = 130.5, p = .057), perceived as intelligent (M = 3.37, SD = 0.78; t(38) = −0.06, p = .384) and safe (M = 3.93, SD = 0.75; t(38) = −.06, p = .683), but animacy ratings were rather low (M = 2.02, SD = 0.73; t(38) = −.06, p = .184).

3.3 Interdependencies of Trust Attitude Measures

We assessed trust attitude with three different measures: trust propositions with Charalambous et al.’s [13] trust questionnaire (trust prop.), a single item asking for the perceived reliability of the robot (rel), and a single trust item (trust). For the correlational analyses, the variables were averaged over t0, t1, t2, and t3. Results revealed significant positive relationships between the three trust attitude measures (rtrust prop – rel = .564, p < .001; rtrust prop – trust = .486, p = .001; rrel – trust = .524, p = .001). These medium-sized correlations indicate that the three measures were related but, apparently and as expected, assessed different aspects of trust attitude.

3.4 Trust Development with Positive Interaction Experience

We first compared participants’ ratings on Charalambous et al.’s [13] trust questionnaire. Overall ratings only revealed a significant main effect of experience, F(2, 76) = 25.49, p < .001, ηp2 = .40. Post hoc tests using Bonferroni correction for multiple comparisons revealed that participants’ trust was significantly lower prior to interaction (Mt0 = 39.15, SDt0 = 5.13) than after the experience of faultless interaction at t1 (M = 43.20, SD = 5.44; p < .001) and t2 (M = 44.22, SD = 5.17; p < .001), but did not differ between the latter two (p = .313). Anthropomorphic design did not have a significant effect on trust formation (F < 1). This pattern was mirrored in the results for the subscales robot's speed, safe co-operation, and gripper reliability, which again only revealed significant main effects of experience, with the most pronounced differences between ratings prior to and after interaction (speed: F(1.3, 50.54) = 19.68, p < .001, ηp2 = .34; safe co-operation: F(2, 76) = 11.47, p < .001, ηp2 = .23; gripper reliability: F(2, 76) = 9.66, p < .001, ηp2 = .20).

For the single-item reliability and trust ratings, we decided to substitute the data of one participant in the non-anthropomorphic condition for all analyses based on these variables. Box plot outlier analyses showed that most ratings of this person deviated substantially from the group means (deviations for single-item trust: t0 more than 3 box lengths, t1 and t3 more than 1.5 box lengths from the edge of the box; for single-item reliability: t0, t1, and t3 more than 1.5 box lengths from the edge of the box). We substituted the corresponding data with the mean minus two standard deviations [23]. We also substituted one data point from each of two participants in the anthropomorphic condition with the group mean. One participant rated subjective trust at t0 with 0, the other one rated the robot's reliability at t1 with 0. We assume that this happened accidentally, as all other reliability and trust ratings of both participants were substantially higher (e.g. reliability at t0 = 84%; trust at t1 = 71%).

Single item reliability ratings are shown in Figure 5. Participants perceived the anthropomorphic robot to be significantly less reliable (Manthro = 81.15%, SDanthro = 16.93%; Mnon-anthro = 88.87%, SDnon-anthro = 16.93%; F(1, 38) = 4.15, p = .048, ηp2 = .09). No significant results were found for experience (F(2, 76) = 2.79, p = .067, ηp2 = .06), nor was there an interaction effect of anthropomorphism and experience (F < 1).

Fig. 5.

Fig. 5. Single item reliability and trust ratings before interacting with the robot (t0) and with ongoing positive experience (t1, t2). Depicted are means and standard errors.

On a descriptive level, results for the single-item trust ratings (Figure 5) pointed in the same direction as the reliability ratings, with lower trust ratings for the anthropomorphic robot. However, this pattern was not statistically supported, F(1, 38) = 3.69, p = .062, ηp2 = .08. With ongoing experience, trust in both conditions increased significantly, F(1.37, 52.16) = 10.37, p = .001, ηp2 = .21. Post hoc tests using Bonferroni corrections showed that the increase was most apparent between ratings prior to and after interactions (t0 vs. t1: p = .007; t0 vs. t2: p = .004).

In sum, findings for trust attitude did not show the expected positive effect of an anthropomorphic robot design. Whereas no positive impact was found for either trust propositions or the actual trust level, the perceived reliability even revealed a negative impact of anthropomorphism.

Trust behavior was defined as the mean handover time between robot and participant, with longer times interpreted as lower trust. In both conditions, the data of one participant could not be analyzed because of technical failures in the video recording. The remaining data showed a beneficial effect of interaction experience. After a prolonged time of interaction with the robot, handover times in both conditions declined (MB1 = 2.71s, SDB1 = 0.67s; MB2 = 2.74s, SDB2 = 0.52s; MB3 = 2.54s, SDB3 = 0.46s). This was statistically supported by a significant main effect of experience, F(2, 72) = 4.93, p = .010, ηp2 = .12. Post hoc tests using Bonferroni correction for multiple comparisons located the significant decrease in handover time between the second and third block (p = .005). Whether the robot had a face or not did not affect handover times, nor was there an interaction effect between anthropomorphism and experience (F < 1).

3.5 Visual Attention Allocation

Data of 39 participants were included in analyses on visual attention allocation. One participant's data were excluded due to technical problems with the recording.

Results regarding the number of fixations for all AOIs are depicted in Figure 6. For the AOIs shelf and assembly area, results showed an effect of time on task (shelf: F(1.59, 59.16) = 128.87, p < .001, ηp2 = .77; assembly area: F(1.59, 59.01) = 31.02, p < .001, ηp2 = .45). For the AOI shelf, post hoc tests using Bonferroni correction for multiple comparisons revealed that this effect was due to a significant decrease of visual attention to this area in block 3 compared to block 1 (p < .001) and block 2 (p < .001). Regarding participants’ attention to their assembly area, the data showed a continuous decrease of attention to this AOI from the first to the last block, with significant differences between all blocks (b1 vs. b2, b1 vs. b3: p < .001; b2 vs. b3: p = .006). Neither an impact of anthropomorphism nor interaction effects between anthropomorphism and experience were found (shelf & assembly area: F < 1; see Figure 6).

Fig. 6.

Fig. 6. Number of fixations for the AOIs shelf (upper left), assembly (upper right), handover (lower left), and robot display (lower right). Depicted are means per block with according standard errors.

However, this changed substantially for the other two AOIs. Participants paid less attention to the handover area with ongoing interaction experience, F(2, 74) = 8.50, p < .001, ηp2 = .18 (b1 vs. b2, p = .002; b1 vs. b3, p = .007; no difference between b2 and b3, p = 1.0). But there was also a large difference in the overall amount of attention allocated to this area. Participants in the anthropomorphic condition looked at this area significantly less often (M = 47.78, SD = 8.44) than participants in the non-anthropomorphic condition (M = 56.26, SD = 13.34), F(1, 37) = 5.55, p = .024, ηp2 = .13.

This was mirrored for the AOI representing the robot's display showing either a face or the logo of the robot's company. In the anthropomorphic face condition participants looked at the display significantly more often (M = 25.85, SD = 18.07) than participants in the non-anthropomorphic condition looked at the display without a face (M = 8.45, SD = 7.45; F(1, 37) = 15.75, p < .001, ηp2 = .29). This effect did not change across blocks (F(1.47, 54.53) = 1.38, p = .256, ηp2 = .03).

Results for the mean fixation time for the AOI shelf again revealed an effect of experience, F(2, 74) = 7.92, p = .001, ηp2 = .17. The mean fixation time decreased with ongoing time on task (Mb1 = 255.13ms, SDb1 = 29.64ms; Mb2 = 254.10ms, SDb2 = 25.66ms; Mb3 = 243.07ms, SDb3 = 28.35ms). Post-hoc tests using Bonferroni correction showed a significant difference in fixation duration between block 1 and block 3 (p = .009). No other effects were found (F < 1).

Participants showed a stable mean fixation duration with regard to the assembly area. On average, a fixation on this area lasted 195.20ms (SD = 24.72ms). Neither anthropomorphism nor interaction experience had a significant impact (main effect anthropomorphism: F(1, 37) = 2.45, p = .126, ηp2 = .06; main effect interaction experience: F < 1; interaction effect: F < 1).

Regarding the handover AOI, there was an effect of experience, F(2, 74) = 3.46, p = .036, ηp2 = .08. With ongoing time on task, fixation times decreased (Mb1 = 188.82ms, SDb1 = 29.84ms; Mb2 = 177.56ms, SDb2 = 29.89ms; Mb3 = 182.32ms, SDb3 = 27.86ms). Post-hoc tests with Bonferroni correction for multiple comparisons showed that this reduction was most prominent from block 1 to block 2 (p = .020). Anthropomorphism did not have a significant impact (F(1, 37) = 1.78, p = .190, ηp2 = .04), nor was there an interaction effect (F(2, 74) = 1.75, p = .180, ηp2 = .04).

The mean fixation time for the AOI robot display could not be calculated for all participants because some never looked at the robot's display and therefore produced missing data for this measure. Therefore, the analysis was conducted with n = 14 participants in the non-anthropomorphic and n = 17 participants in the anthropomorphic condition. Fixation duration showed no differences between the two anthropomorphism conditions, F(1, 29) = 3.21, p = .083, ηp2 = .10, nor an impact of ongoing positive interaction experience (F < 1) or an interaction effect (F < 1).

3.6 Trust Dissolution with Negative Interaction Experience

The failure experience did have a significant impact on participants’ trust attitude and trust behavior. For trust attitude this was revealed by all measures. Overall ratings of Charalambous et al.’s [13] trust questionnaire showed a significant effect of failure experience (Mprefailure = 44.22, SDprefailure = 5.17; Mpostfailure = 40.15, SDpostfailure = 6.06; F(1, 38) = 35.22, p < .001, ηp2 = .48). Anthropomorphism had no impact (F < 1) nor was there an interaction effect (F < 1). This pattern of results was also found for the subscales, with the strongest effect of failure experience on the reliability subscale (speed: F(1, 38) = 6.03, p = .019, ηp2 = .13; safe co-operation: F(1, 38) = 4.24, p = .046, ηp2 = .10; gripper reliability: F(1, 38) = 40.75, p < .001, ηp2 = .51). No other effects were significant (all F < 1).

Results for the single items were in line with this pattern (see Figure 7 for the single-item reliability and trust ratings). For the perceived robot reliability as well as for the single trust assessment, there was a significant effect of the robot's failure (reliability: F(1, 38) = 48.57, p < .001, ηp2 = .56; trust: F(1, 38) = 26.83, p < .001, ηp2 = .41). Reliability ratings dropped from 87.03% (SD = 12.30%) to 74.79% (SD = 15.56%); accordingly, trust ratings declined from 82.65% (SD = 13.16%) to 73.72% (SD = 15.76%). Neither effects of anthropomorphism nor an interaction effect became evident for the reliability or trust items prior to and after the robot's failure (for reliability: anthropomorphism, F(1, 38) = 3.91, p = .055, ηp2 = .09; interaction, F(1, 38) = 1.06, p = .310, ηp2 = .02; for trust: anthropomorphism, F(1, 38) = 2.39, p = .130, ηp2 = .05; interaction, F(1, 38) = 0.83, p = .367, ηp2 = .02).

Fig. 7.

Fig. 7. Single items perceived reliability (left part) and subjective trust (right part) directly before and after failure experience in the last block. Depicted are means with according standard errors.

Trust behavior was again assessed by handover time. Two participants could not be included in the analyses due to missing recordings. The comparison of pre- and post-failure times revealed a main effect of failure experience (F(1, 36) = 11.02, p = .002, ηp2 = .23). Prior to the robot's handover failure, participants spent 2.63s in the handover area (SD = 0.60s); this time increased to 2.92s (SD = 0.55s) after the negative experience. This finding was further explained by an interaction effect (F(1, 36) = 6.35, p = .016, ηp2 = .15). Whereas the failure experience had hardly any impact on participants in the non-anthropomorphic condition (pre-failure: Mnon-anthro = 2.75s, SDnon-anthro = 0.68s; post-failure: Mnon-anthro = 2.82s, SDnon-anthro = 0.55s), the failure resulted in an increase in handover time for participants working together with the anthropomorphic robot (pre-failure: Manthro = 2.51s, SDanthro = 0.50s; post-failure: Manthro = 3.02s, SDanthro = 0.55s; Figure 8).

Fig. 8.

Fig. 8. Handover time directly before and after failure experience in the last block. Depicted are means and standard errors.


4 DISCUSSION

The main objective of this study was to examine the effects of anthropomorphic robot design on trust and visual attention in an industrial work setting. On the one hand, we assumed that an anthropomorphic robot design might increase trust in the industrial robot, comparable to findings from social HRI. On the other hand, we expected negative effects of an anthropomorphic robot design, as it might inappropriately alter patterns of visual attention in HRI. Results revealed some surprising findings.

First of all, our manipulation check revealed that both robot designs resulted in comparable perceptions regarding the anthropomorphism, human-likeness, likeability, intelligence, and safety scales. Participants rated the anthropomorphism as well as the human-likeness and animacy of both robot designs rather low compared to the high ratings of likeability, intelligence, and safety. A rather low level of perceived anthropomorphism for both robot designs was not unexpected, because Sawyer is an industrial single-arm robot that does not evoke very human-like associations. Our anthropomorphic manipulation was applied by implementing a relatively abstract face on a display. This face consisted of eyes and corresponding eyebrows that simulated (random) gaze behavior. In general, an anthropomorphic implementation via a face seems to be a valid manipulation, as facial features are known to be a central informational component in human-human interaction that also gives cues about the mental state of interaction partners [28, 70]. We further opted for this relatively subtle form of manipulation as it reflects a realistic implementation of anthropomorphism in industrial HRI. Its practical relevance is demonstrated by the manufacturer Rethink Robotics, as the two face variations are already available in the robot's standard configuration. However, we still would have expected distinct perceptions of anthropomorphism in our two conditions. In this regard, and if a direct transferability of results to an industrial application is not the main focus of interest, future studies should consider using more, and more salient, anthropomorphic features like complete faces (instead of our abstract version) or varying not only the appearance but also communicational aspects of appearance like facial expressions (especially emotions).

Even though the perception of anthropomorphism did not show the expected distinct effects, the two robot designs affected trust attitudes to some extent and revealed effects on attention allocation. Therefore, besides our subtle manipulation of anthropomorphism, an alternative explanation for the missing differences might be related to the instruments chosen to assess perceived anthropomorphism. In line with the majority of researchers, we chose the Godspeed questionnaire (over 1,000 citations [6]) and a subscale of its revised version [32]. This questionnaire is widely accepted and has been validated in several studies. A distinctive feature, however, is that all validation studies have exclusively used different robots and have not included a comparison of a more versus less anthropomorphic version of the same robot. Accordingly, the Godspeed questionnaire has subsequently been applied in studies to compare different robots, not one robot that is varied with regard to its degree of anthropomorphism. It might be possible that this measure is not sensitive enough for subtler differences in anthropomorphism like those in our study. First studies, including this one, that manipulated anthropomorphism using the same robot support this idea. In these studies, anthropomorphism was either varied by framing a robot anthropomorphically [50] or by additionally changing the appearance (the same manipulation as in this study [58]). Whereas anthropomorphism showed an impact on objective and subjective measures (e.g. donation behavior or trust attitude), again no differences were found with respect to the Godspeed questionnaire. These results suggest that future studies are needed to determine whether our null findings with regard to anthropomorphism are due to an unsuccessful manipulation or to a lack of sensitivity of the measurement instrument.

However, even without this evidence for a distinct perception of anthropomorphism, trust and attention were affected by the two different robot designs. Findings are discussed and detailed in the following.

In the current study, we assessed aspects of trust attitude with three different variables. As expected, measures were significantly correlated while assessing different aspects of trust. The trust questionnaire by Charalambous and colleagues [13] measures factors impacting trust formation but not the actual trust level. Therefore, we decided to include a single question directly asking for participants’ trust. We further asked for the perceived overall reliability of the robot with a single item.

With regard to our first hypothesis, stating that an anthropomorphic robot design leads to higher trust levels, this assumption has to be rejected. The anthropomorphic design had no significant impact on propositions of trust nor on the actual trust level. In addition, results for the perceived overall reliability of the robot even showed a negative impact of anthropomorphism. Participants consistently perceived the anthropomorphic robot to be less reliable compared to the non-anthropomorphic robot. This was already present before interacting with the robot (just by seeing it) and remained with an ongoing positive interaction experience, which was exactly the same in both conditions.

Whereas the perceived reliability of the robot was unaffected by positive interactions, this was different for the other two trust attitude measures (trust questionnaire and single-item trust). Trust ratings prior to interacting with the robot were significantly lower than ratings after first successful interaction experiences. Results thereby revealed an adaptive trust formation that was based on the visual first impression of the robot and adjusted to the actual interaction experience [41].

Measures of trust behavior did not differ between conditions, but again reflected the process of trust formation, as participants' time in the handover area decreased with increasing interactions. Apparently, participants started to trust the robot's capability to fulfill the handover task (as revealed by the trust attitude measures), which enabled an efficient coordination of their behavior with the robot's actions.

With regard to the impact of anthropomorphism, the observation that differences between the two groups were only found for reliability ratings is instructive as to what might be important for trust formation. The trust questionnaire [13] primarily focuses on rational aspects of the interaction (e.g. the robot's speed or gripper reliability) and accordingly primes a cognitive evaluation of the robot's trustworthiness. When asked to rate their overall trust in the robot and its reliability, participants in contrast might base their judgement not only on hard facts but also on affective aspects and contextual factors [52, 60, 64]. Especially the latter might play an important role for the robot's perceived reliability. Contextual factors might explain the negative perception of the anthropomorphic robot, as it does not seem to fit into an industrial setting in which people normally expect to interact with professional machines rather than human-like robots. Nevertheless, this tendency towards a negative robot perception in terms of trust did not translate into actual behavior in interaction with the robot. This is in line with previous research revealing that trust attitude and trust behavior can diverge [53, 56]. Our results can therefore be understood as support for the importance of assessing trust at multiple levels. Future studies should continue to address both levels of trust and attempt to identify the factors that determine when trust attitude guides behavior and when it does not.

Our second hypothesis stated that an anthropomorphic robot design leads to a more forgiving attitude towards the robot. This should result in a less pronounced trust decline after the experience of a robot failure. Surprisingly, neither trust attitude nor trust behavior measures supported this assumption.

First of all, failure experience had a significant impact on trust attitude, as revealed by all trust attitude measures. Trust behavior was also affected by the negative interaction experience with the robot, resulting in longer handover times after the handover failure. This main effect was further qualified by an interaction of failure experience and anthropomorphism. In the anthropomorphic condition, handover times prior to the robot's failure were shorter compared to the non-anthropomorphic condition. This difference before the robot failure might indicate higher behavioral trust after a prolonged time of successful interactions with the anthropomorphic robot. However, the subsequent negative experience affected handover times more strongly in this condition, so that participants working with the human-like robot spent more time on the handover procedure post-failure than the non-anthropomorphic group. In the non-anthropomorphic group, the robot's mistake thus did not affect participants' coordination with and adaptation to the robot's actions, whereas it alienated participants interacting with the human-like robot to some extent, as they again invested more time in the handover to ensure a safe box transfer. Therefore, in contrast to our hypothesis, anthropomorphism in HRI not only failed to show positive effects but even seems to have had a detrimental effect on trust, in this case on trust dissolution [41]. This negative effect of anthropomorphism on trust dissolution after the experience of a single failure is in line with results by Ahmad et al. [3], who also reported a negative effect on trust in a robot with low error rates. Whereas Ahmad et al. revealed this negative effect for trust attitude, the current study found the effect for trust behavior.
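For illustration, the following minimal sketch shows how such a 2 (robot design, between-subjects) × 2 (pre- vs. post-failure, within-subjects) mixed design on handover times could be analyzed in Python with the pingouin package. The data file and column names (participant, design, phase, handover_time) are hypothetical assumptions for demonstration only; this is not the authors' actual analysis pipeline.

```python
# Minimal sketch (hypothetical data and column names, not the authors' pipeline):
# mixed ANOVA on handover times with robot design as between-subjects factor
# and failure phase as within-subjects factor.
import pandas as pd
import pingouin as pg

# expected long format: one row per participant and phase
# columns: participant, design ("anthropomorphic"/"non_anthropomorphic"),
#          phase ("pre_failure"/"post_failure"), handover_time (seconds)
df = pd.read_csv("handover_times.csv")

aov = pg.mixed_anova(
    data=df,
    dv="handover_time",
    within="phase",
    between="design",
    subject="participant",
)
# the design x phase interaction row corresponds to the effect discussed above
print(aov[["Source", "F", "p-unc", "np2"]])
```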

To summarize the findings on the impact of non-instrumental anthropomorphic robot design on trust, it seems that robots presented as tools are perceived as more trustworthy in a work-related setting than robots presented as human-like machines. An anthropomorphic robot design did not reveal any beneficial effects, neither with regard to trust attitude nor to trust behavior. This finding is in clear contrast to previous research on the effect of anthropomorphism, at least in social HRI [24, 55, 72]. Therefore, more research is needed to understand these results in more detail. One aspect that needs further consideration is our non-anthropomorphic condition. In the current study, we realized anthropomorphism by displaying an abstract face and compared this to the presentation of the company's logo. Although the brand logo was visible in both conditions as part of the physical robot design (e.g. the name “Sawyer” on the torso, the brand logo on the robot's arm), the additional emphasis of the logo could have affected results. From brand research, it is known that branding has a positive impact on the perception and evaluation of products (at least for trusted brands [2, 16, 54, 65]). Based on our study, we cannot determine whether anthropomorphism had a genuinely negative impact on the robot's perception or whether the difference resulted from a positive gain in the other condition due to the salient presentation of the brand logo. To identify the cause of the observed differences in reliability perceptions, following studies should therefore add a control group with a blank display as a reference to the current study design.

Although more research is clearly needed on the impact of anthropomorphism in industrial HRI, our results indicate that a purely functional robot design might better correspond to people's expectations of a highly reliable and dependable machine in an industrial domain. Compared to findings from social HRI, this highlights the importance of the context as a possible moderating factor in shaping the impact of anthropomorphic design on trust in HRI.

In addition to trust as a central variable, the current study also focused on the influence of anthropomorphic design on visual attention. Our last hypothesis stated that human-like attributes of a robot, in our case the robot's face, should change attentional patterns in the interaction and lead to an attentional shift towards the robot's face. Eye-tracking data supported this hypothesis. The results make evident that people interacting with the anthropomorphic robot ascribed far more importance to the head-mounted display than participants in the non-anthropomorphic condition throughout the experiment. At the same time, participants in the anthropomorphic group did not look at the handover area as often as the non-anthropomorphic group. These findings are in line with previous research by Bae and Kim [4], revealing that anthropomorphically designed robots evoke a higher degree of visual attentiveness than robots with a machine-like design. Specifically, our results confirm that the presence of a human-like face encourages humans to direct their gaze towards the robot in a manner comparable to human-human interaction. In interpersonal interactions, such as the handover of an object, people also seek eye contact to coordinate their movements and to predict the actions of their counterpart [10, 62, 71]. The high number of fixations indicates that even robot eyes that look rather unrealistic and caricature-like might lead to the preferential processing we know from human-human interaction [20]. To trigger this highly overlearned mechanism, it seems to be sufficient to simply provide prototypical features of a human face, such as eyebrows and black spots.
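As a simplified illustration of how such fixation-based AOI measures can be derived, the following sketch applies a basic dispersion-based (I-DT) fixation detection, in the spirit of [8, 61], and counts fixations per area of interest. The thresholds, sampling rate, AOI labels, and the helper aoi_of_point are assumptions for demonstration only and do not reproduce the Pupil Labs pipeline used in the study.

```python
# Illustrative sketch only: dispersion-based (I-DT) fixation detection and
# per-AOI fixation counts. All thresholds and AOI labels are assumptions.
from typing import Callable, Dict, List, Tuple

MAX_DISPERSION = 0.05  # assumed maximum spread of a fixation (normalized gaze coordinates)
MIN_SAMPLES = 20       # assumed minimum fixation duration, e.g. 100 ms at 200 Hz


def _dispersion(window: List[Tuple[float, float]]) -> float:
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))


def detect_fixations(gaze: List[Tuple[float, float]]) -> List[Tuple[int, int]]:
    """Return (start, end) sample indices of fixations following the I-DT idea."""
    fixations, start = [], 0
    while start + MIN_SAMPLES <= len(gaze):
        end = start + MIN_SAMPLES
        if _dispersion(gaze[start:end]) <= MAX_DISPERSION:
            # extend the window until the dispersion threshold is exceeded
            while end < len(gaze) and _dispersion(gaze[start:end + 1]) <= MAX_DISPERSION:
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations


def fixations_per_aoi(
    gaze: List[Tuple[float, float]],
    aoi_of_point: Callable[[float, float], str],  # hypothetical mapping to AOI labels
) -> Dict[str, int]:
    """Assign each fixation centroid to an AOI (e.g. 'face_display', 'handover_area')."""
    counts: Dict[str, int] = {}
    for s, e in detect_fixations(gaze):
        cx = sum(x for x, _ in gaze[s:e]) / (e - s)
        cy = sum(y for _, y in gaze[s:e]) / (e - s)
        aoi = aoi_of_point(cx, cy)
        counts[aoi] = counts.get(aoi, 0) + 1
    return counts
```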

An additional aspect that may have contributed to the attentional pattern found in our experiment is the simple animation of the robotic eyes. The animation might have been more engaging than the non-animated logo display, which could have promoted stimulus-driven, bottom-up information processing. Consequently, one could argue that the results might not only follow from our anthropomorphism manipulation but could also be affected by the saliency of movement. However, results for fixation time did not reveal any differences between the two conditions, which indicates that both robot designs were comparably engaging for participants [53]. Moreover, the specific pattern of visual distribution also argues for anthropomorphism as the driving force. If the animation had been the main impact factor, we would have expected a comparable reduction of attention across all remaining AOIs in the anthropomorphic condition. Instead, attention was only withdrawn from the handover AOI. This indicates that the changed attentional pattern reflects a different gaze sequence during the handover. Instead of looking at the robot's arm and the box when preparing to receive the box, participants in the anthropomorphic condition most likely applied a schema from human-human interaction by seeking eye contact to coordinate. Following this interpretation, the results highlight the importance of differentiating between non-instrumental and instrumental anthropomorphic robot design. In the current study, anthropomorphism was implemented as a design element but not as a functional feature: the robotic eyes showed neither mutual nor deictic gaze [1]. As participants might have expected this functional component of eyes, they might have tried to engage in joint attention with the robot. Therefore, if features like a face are implemented in robot design at all, they should at least be instrumental, i.e. support the task at hand, to positively affect HRI [47, 69].

Taken together, the findings for visual attention revealed a distracting effect of non-instrumental dynamic face features of a robot. To further understand these effects and the mechanisms driving them, the results call for further investigation of instrumental and non-instrumental anthropomorphism in HRI. More specifically, studies should compare static face representations (always non-instrumental) with dynamic non-instrumental and dynamic instrumental robot faces to gain further insights into the differential effects of anthropomorphic design in HRI.

With regard to methodological aspects, our study identified several challenges that need to be addressed in future research on anthropomorphism in an industrial context. Considering the industrial domain, subtle manipulations of anthropomorphism are needed, as only these represent realistic conditions and thereby allow a transfer of results. At the same time, these subtle manipulations still have to be proven successful, which should be reflected in corresponding manipulation checks. This highlights the need for sensitive measures of anthropomorphism, as studies focusing on a work-related context will most probably use variations of industrial robots instead of completely different robots. If it is not clear whether existing measures are sufficiently sensitive, it will always remain questionable whether results are due to an unsuccessful manipulation or due to a lack of sensitivity of the measures. This is exactly the case for the reported study.

Moreover, our interpretation of the results for trust attitude is based on mixed findings, and only some results with regard to our anthropomorphism manipulation reached the conventional level of significance whereas others just missed it (e.g. p = .062 for single-item trust, p = .055 for perceived reliability after failure experience). This partial lack of statistical support might be due to the relatively small sample size. Although we calculated the sample size beforehand, we might have overestimated the expected effect sizes and therefore underestimated the appropriate sample size for the study design.
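To illustrate this point, a small a-priori power computation (a hedged sketch using statsmodels; the effect sizes, alpha, and power values are assumptions and not the values of the original G*Power calculation [21]) shows how strongly the required sample size grows when the assumed effect size shrinks:

```python
# Hedged illustration: a-priori sample size for a two-group comparison at different
# assumed effect sizes. Values are assumptions, not the study's original calculation.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for assumed_d in (0.8, 0.5, 0.3):  # large, medium, small standardized mean difference
    n_per_group = power_analysis.solve_power(
        effect_size=assumed_d, alpha=0.05, power=0.80, alternative="two-sided"
    )
    print(f"Cohen's d = {assumed_d}: about {n_per_group:.0f} participants per group")

# Powering a study for d = 0.8 yields roughly 26 participants per group, which is
# clearly too few if the true effect is d = 0.5 (about 64) or d = 0.3 (about 175).
```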

Therefore, further studies are needed to gain more insights into the impact of a human-like robot design on trust in a work-related setting. Concerning the effects of anthropomorphic design on visual attention, future research is needed to validate our results and to further differentiate between non-instrumental and instrumental anthropomorphic features. In sum, this study was one of the first to investigate anthropomorphic robot features in industrial HRI, and the findings call for more empirical work to determine whether the current results are a first indicator of a serious (negative) impact of non-instrumental anthropomorphic design on trust and attention in an industrial context.


5 CONCLUSIONS

Our study illustrates that anthropomorphic robot design might not be a universal remedy to ease HRI and promote trust in robots. The successful implementation of anthropomorphic design features is highly sensitive to the context and to the functionality fostered by the design.

In our hypotheses, we stated that a human-like robot design should be beneficial for HRI in terms of trust. At the same time, we assumed that it could be detrimental, as anthropomorphic features distract the human from more task-relevant work areas. Results were not in favor of this supposed tradeoff between benefits and costs of anthropomorphism: we could not find any benefits of an anthropomorphic robot design in the current study. The human-like features did not have the intended positive effects known from social HRI and even showed a negative impact on the perceived reliability of the robot. Additionally, the robot's human-like face shifted participants' visual attention away from task-relevant areas. In real work environments, this could be critical in terms of occupational safety. In HRI work settings that involve the handling of welding equipment, heavy loads, hot workpieces, or other safety-critical components, people have to focus on the critical task steps and not on a robotic face that does not convey relevant task information. Therefore, anthropomorphic design should not be generally applied to industrial robots if it does not support the task.

These findings and recommendations are in contrast to numerous positive results of anthropomorphic design in social HRI. This suggests a moderating factor that shapes the effect of anthropomorphic design on trust: the context. In social HRI, the main task of human and robot is the interaction itself. An anthropomorphic robot design therefore supports this task, as it introduces more social cues into the interaction. In an industrial setting, the interaction of human and robot might be a necessary step in task fulfillment but not the task goal itself. Anthropomorphism might therefore not be beneficial if it does not clearly ease the task at hand. The results of our study support this need for context differentiation but require further validation.

Whereas the body of HRI research is growing, the systematic consideration of such meta-variables has not gained much attention yet, although it is crucial to our understanding and interpretation of conflicting findings. Taken together, anthropomorphism seems to be a mixed blessing in HRI. More research is needed to specify the conditions that determine in which cases anthropomorphism in robot design is beneficial or detrimental to HRI.

REFERENCES

  [1] Admoni Henny and Scassellati Brian. 2017. Social eye gaze in human-robot interaction: A review. J. Human-Robot Interact. 6, 1 (2017), 25–63. DOI: https://doi.org/10.5898/jhri.6.1.admoni
  [2] Aghekyan-Simonian Mariné, Forsythe Sandra, Kwon Wi Suk, and Chattaraman Veena. 2012. The role of product brand image and online store image on perceived risks and online purchase intentions for apparel. J. Retail. Consum. Serv. 19, 3 (2012), 325–331. DOI: https://doi.org/10.1016/j.jretconser.2012.03.006
  [3] Ahmad Muneeb Imtiaz, Bernotat Jasmin, Lohan Katrin, and Eyssel Friederike. 2019. Trust and cognitive load during human-robot interaction. In Proceedings of the AAAI Symposium on Artificial Intelligence for Human-Robot Interaction, 10 pages. Retrieved from http://arxiv.org/abs/1909.05160.
  [4] Bae Jae-Eul and Kim Myung-Suk. 2011. Selective visual attention occurred in change detection derived by animacy of robot's appearance. In Proceedings of the 2011 International Conference on Collaboration Technologies and Systems (CTS’11), 190–193. DOI: https://doi.org/10.1109/CTS.2011.5928686
  [5] Bartneck Christoph and Forlizzi Jodi. 2004. A design-centred framework for social human-robot interaction. In Proceedings of the 2004 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2004), IEEE, Kurashiki, Okayama, Japan, 591–594. DOI: https://doi.org/10.1109/ROMAN.2004.1374827
  [6] Bartneck Christoph, Kulić Dana, Croft Elizabeth, and Zoghbi Susana. 2009. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int. J. Soc. Robot. 1 (2009), 71–81. DOI: https://doi.org/10.1007/s12369-008-0001-3
  [7] Bernotat Jasmin and Eyssel Friederike. 2018. Can(‘t) wait to have a robot at home? Japanese and German users’ attitudes toward service robots in smart homes. In Proceedings of the 2018 IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, 15–22. DOI: https://doi.org/10.1109/ROMAN.2018.8525659
  [8] Blignaut Pieter. 2009. Fixation identification: The optimum threshold for a dispersion algorithm. Atten. Percept. Psychophys. 71, 4 (2009), 881–895. DOI: https://doi.org/10.3758/APP.71.4.881
  [9] Bortz Jürgen. 2006. Statistik: Für Human- und Sozialwissenschaftler (6th ed.). Springer Medizin Verlag.
  [10] Boucher Jean-David, Pattacini Ugo, Lelong Amelie, Bailly Gerrard, Elisei Frederic, Fagel Sascha, Dominey Peter Ford, and Ventre-Dominey Jocelyne. 2012. I reach faster when I see you look: Gaze effects in human–human and human–robot face-to-face cooperation. Front. Neurorobot. 6, 3 (2012), 1–11. DOI: https://doi.org/10.3389/fnbot.2012.00003
  [11] Burke Jenny, Coovert Michael, Murphy Robin, Riley Jennifer, and Rogers Erika. 2006. Human-robot factors: Robots in the workplace. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 50, 9 (2006), 870–874. DOI: https://doi.org/10.1177/154193120605000902
  [12] Castro-González Álvaro, Admoni Henny, and Scassellati Brian. 2016. Effects of form and motion on judgments of social robots’ animacy, likability, trustworthiness and unpleasantness. Int. J. Hum. Comput. Stud. 90 (2016), 27–38. DOI: https://doi.org/10.1016/j.ijhcs.2016.02.004
  [13] Charalambous George, Fletcher Sarah, and Webb Philip. 2016. The development of a scale to evaluate trust in industrial human-robot collaboration. Int. J. Soc. Robot. 8, 2 (2016), 193–209. DOI: https://doi.org/10.1007/s12369-015-0333-8
  [14] Darling Kate. 2017. “Who's Johnny?” Anthropomorphic framing in human-robot interaction, integration, and policy. In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, Lin P., Abney K., and Jenkins R. (eds.). Oxford University Press, 173–188. DOI: https://doi.org/10.1093/oso/9780190652951.003.0012
  [15] Darling Kate, Nandy Palash, and Breazeal Cynthia. 2015. Empathic concern and the effect of stories in human-robot interaction. In Proceedings of the 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, Kobe, Japan, 770–775. DOI: https://doi.org/10.1109/ROMAN.2015.7333675
  [16] Dodds William B., Monroe Kent B., and Grewal Dhruv. 1991. Effects of price, brand, and store information on buyers’ product evaluations. J. Mark. Res. 28, 3 (1991), 307–319. DOI: https://doi.org/10.1177/002224379102800305
  [17] Duffy Brian R. 2003. Anthropomorphism and the social robot. Rob. Auton. Syst. 42, 3–4 (2003), 177–190. DOI: https://doi.org/10.1016/S0921-8890(02)00374-3
  [18] Dzindolet Mary T., Peterson Scott A., Pomranky Regina A., Pierce Linda G., and Beck Hall P. 2003. The role of trust in automation reliance. Int. J. Hum. Comput. Stud. 58, 6 (2003), 697–718. DOI: https://doi.org/10.1016/S1071-5819(03)00038-7
  [19] Eyssel Friederike A. and Pfundmair Michaela. 2015. Predictors of psychological anthropomorphization, mind perception, and the fulfillment of social needs: A case study with a zoomorphic robot. In Proceedings of the 24th IEEE International Workshop on Robot and Human Interactive Communication, IEEE, Kobe, Japan, 827–832. DOI: https://doi.org/10.1109/ROMAN.2015.7333647
  [20] Farroni Teresa, Csibra Gergely, Simion Francesca, and Johnson Mark H. 2002. Eye contact detection in humans from birth. Proc. Natl. Acad. Sci. U.S.A. 99, 14 (2002), 9602–9605. DOI: https://doi.org/10.1073/pnas.152159999
  [21] Faul Franz, Erdfelder Edgar, Buchner Axel, and Lang Albert-Georg. 2009. Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behav. Res. Methods 41, 4 (2009), 1149–1160. DOI: https://doi.org/10.3758/BRM.41.4.1149
  [22] Feuerberg Beatrice Victoria, Bahner Jennifer Elin, and Manzey Dietrich. 2005. Interindividuelle Unterschiede im Umgang mit Automation – Entwicklung eines Fragebogens zur Erfassung des Complacency-Potentials. In Zustandserkennung und Systemgestaltung: 6. Berliner Werkstatt Mensch-Maschine-Systeme. ZMMS, Berlin, Germany, 199–202.
  [23] Field Andy. 2005. Discovering Statistics Using SPSS (2nd ed.). SAGE Publications, London, UK.
  [24] Fong Terrence, Nourbakhsh Illah, and Dautenhahn Kerstin. 2003. A survey of socially interactive robots. Rob. Auton. Syst. 42, 3–4 (2003), 143–166. DOI: https://doi.org/10.1016/S0921-8890(02)00372-X
  [25] Franke Thomas, Attig Christiane, and Wessel Daniel. 2019. A personal resource for technology interaction: Development and validation of the affinity for technology interaction (ATI) scale. Int. J. Hum. Comput. Interact. 35, 6 (2019), 456–467. DOI: https://doi.org/10.1080/10447318.2018.1456150
  [26] Freedy Amos, De Visser Ewart, Weltman Gershon, and Coeyman Nicole. 2007. Measurement of trust in human-robot collaboration. In Proceedings of the 2007 International Symposium on Collaborative Technologies and Systems (CTS), Orlando, Florida, USA, 106–114. DOI: https://doi.org/10.1109/CTS.2007.4621745
  [27] Goetz Jennifer, Kiesler Sara, and Powers Aaron. 2003. Matching robot appearance and behavior to tasks to improve human-robot cooperation. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication, IEEE, Millbrae, California, USA, 55–60. DOI: https://doi.org/10.1109/ROMAN.2003.1251796
  [28] Goldin-Meadow Susan. 1999. The role of gesture in communication and thinking. Trends Cogn. Sci. 3, 11 (1999), 419–429. DOI: https://doi.org/10.1016/S1364-6613(99)01397-2
  [29] Hancock Peter A., Billings Deborah R., Schaefer Kristin E., Chen Jessie Y. C., De Visser Ewart J., and Parasuraman Raja. 2011. A meta-analysis of factors affecting trust in human-robot interaction. Hum. Factors 53, 5 (2011), 517–527. DOI: https://doi.org/10.1177/0018720811417254
  [30] Haring Kerstin Sophie, Silvera-Tawil David, Watanabe Katsumi, and Velonaki Mari. 2016. The influence of robot appearance and interactive ability in HRI: A cross-cultural study. In Proceedings of the 8th International Conference on Social Robotics; Lecture Notes in Computer Science, Vol. 9979, Springer, Kansas City, MO, USA, 392–401. DOI: https://doi.org/10.1007/978-3-319-47437-3_38
  [31] Hildebrandt Clara Laudine, Roesler Eileen, and Onnasch Linda. 2020. Anthropomorphism: Challenge or opportunity? The influence of anthropomorphic design elements of an industrial robot on human trust and visual attention allocation in HRI. DOI: https://doi.org/10.17605/OSF.IO/R8YMB
  [32] Ho Chin-Chang and MacDorman Karl F. 2010. Revisiting the uncanny valley theory: Developing and validating an alternative to the Godspeed indices. Comput. Human Behav. 26, 6 (2010), 1508–1518. DOI: https://doi.org/10.1016/j.chb.2010.05.015
  [33] Hoff Kevin Anthony and Bashir Masooda. 2015. Trust in automation: Integrating empirical evidence on factors that influence trust. Hum. Factors 57, 3 (2015), 407–434. DOI: https://doi.org/10.1177/0018720814547570
  [34] Lofaro Daniel M., Wiese Eva, and Weis Patrick P. 2018. Embodied social robots trigger gaze following in real-time HRI. In Proceedings of the 2018 15th International Conference on Ubiquitous Robots (UR). Retrieved July 2, 2020 from https://ieeexplore.ieee.org/abstract/document/8441825/.
  [35] Kassner Moritz, Patera William, and Bulling Andreas. 2014. Pupil: An open source platform for pervasive eye tracking and mobile gaze-based interaction. In Adjunct Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ACM, New York, NY, USA, 1151–1160. DOI: https://doi.org/10.1145/2638728.2641695
  [36] Kiesler Sara, Powers Aaron, Fussell Susan R., and Torrey Cristen. 2008. Anthropomorphic interactions with a robot and robot-like agent. Soc. Cogn. 26, 2 (2008), 169–181. DOI: https://doi.org/10.1521/soco.2008.26.2.169
  [37] Kuz Sinem, Mayer Marcel Ph., Müller Simon, and Schlick Christopher M. 2013. Using anthropomorphism to improve the human-machine interaction in industrial environments (part I). In Proceedings of the 4th International Conference on Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management (DHM), Part II; Lecture Notes in Computer Science, Vol. 8026, Springer, Las Vegas, NV, USA, 76–85. DOI: https://doi.org/10.1007/978-3-642-39182-8_9
  [38] Lee John D. and Moray Neville. 1992. Trust, control strategies and allocation of function in human-machine systems. Ergonomics 35, 10 (1992), 1243–1270. DOI: https://doi.org/10.1080/00140139208967392
  [39] Lee John D. and See Katrina A. 2004. Trust in automation: Designing for appropriate reliance. Hum. Factors 46, 1 (2004), 50–80. DOI: https://doi.org/10.1518/hfes.46.1.50_30392
  [40] Lewis J. David and Weigert Andrew. 1985. Trust as a social reality. Soc. Forces 63, 4 (1985), 967–985. DOI: https://doi.org/10.1093/sf/63.4.967
  [41] Lewis Michael, Sycara Katia, and Walker Phillip. 2018. The role of trust in human-robot interaction. In Foundations of Trusted Autonomy: Studies in Systems, Decision and Control, Vol. 117, Hussein Abbas, Jason Scholz, and Darryn J. Reid (eds.). Springer, 135–159. DOI: https://doi.org/10.1007/978-3-319-64816-3_8
  [42] Malle Bertram F. and Scheutz Matthias. 2016. Inevitable psychological mechanisms triggered by robot appearance: Morality included? In Proceedings of the 2016 AAAI Spring Symposium Series: Technical Reports, AAAI, Palo Alto, CA, USA, 144–146.
  [43] Malle Bertram F., Scheutz Matthias, Forlizzi Jodi, and Voiklis John. 2016. Which robot am I thinking about? The impact of action and appearance on people's evaluations of a moral robot. In Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction, IEEE, Christchurch, New Zealand, 125–132. DOI: https://doi.org/10.1109/HRI.2016.7451743
  [44] Manzey Dietrich, Reichenbach Juliane, and Onnasch Linda. 2012. Human performance consequences of automated decision aids: The impact of degree of automation and system experience. J. Cogn. Eng. Decis. Mak. 6, 1 (2012), 57–87. DOI: https://doi.org/10.1177/1555343411433844
  [45] Martelaro Nikolas, Nneji Victoria C., Ju Wendy, and Hinds Pamela. 2016. Tell me more: Designing HRI to encourage more trust, disclosure, and companionship. In Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, Christchurch, New Zealand, 181–188. DOI: https://doi.org/10.1109/HRI.2016.7451750
  [46] Mayer Marcel Ph., Kuz Sinem, and Schlick Christopher M. 2013. Using anthropomorphism to improve the human-machine interaction in industrial environments (part II). In Proceedings of the 4th International Conference on Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management (DHM), Part II; Lecture Notes in Computer Science, Vol. 8026, Springer, Las Vegas, NV, USA, 93–100. DOI: https://doi.org/10.1007/978-3-642-39182-8_11
  [47] Moon Ajung, Zheng Minhua, Troniak Daniel M., Blumer Benjamin A., Gleeson Brian, Maclean Karon, Pan Matthew K. X. J., and Croft Elizabeth A. 2014. Meet me where I'm gazing: How shared attention gaze affects human-robot handover timing. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI). DOI: https://doi.org/10.1145/2559636.2559656
  [48] Moray Neville and Inagaki Toshiyuki. 1999. Laboratory studies of trust between humans and machines in automated systems. Trans. Inst. Meas. Control 21, 5 (1999), 203–211. DOI: https://doi.org/10.1177/014233129902100408
  [49] Nyström Marcus and Holmqvist Kenneth. 2010. An adaptive algorithm for fixation, saccade, and glissade detection in eyetracking data. Behav. Res. Methods 42, 1 (2010), 188–204. DOI: https://doi.org/10.3758/BRM.42.1.188
  [50] Onnasch Linda and Roesler Eileen. 2019. Anthropomorphizing robots: The effect of framing in human-robot collaboration. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 63, 1 (2019), 1311–1315. DOI: https://doi.org/10.1177/1071181319631209
  [51] Onnasch Linda and Roesler Eileen. 2020. A taxonomy to structure and analyze human–robot interaction. Int. J. Soc. Robot. (2020), 1–17. DOI: https://doi.org/10.1007/s12369-020-00666-5
  [52] Benites Paradeda Raul, Hashemian Mojgan, Rodrigues Rafael Afonso, and Paiva Ana. 2016. How facial expressions and small talk may influence trust in a robot. In Proceedings of the 8th International Conference on Social Robotics; Lecture Notes in Artificial Intelligence, Vol. 9979, Springer, Kansas City, MO, USA, 169–178. DOI: https://doi.org/10.1007/978-3-319-47437-3_17
  [53] Poole Alex and Ball Linden J. 2005. Eye tracking in HCI and usability research. In Encyclopedia of Human Computer Interaction, Claude Ghaoui (ed.). IGI Global, 211–219. DOI: https://doi.org/10.4018/978-1-59140-562-7.ch034
  [54] Rao Akshay R. and Monroe Kent B. 1989. The effect of price, brand name, and store name on buyers’ perceptions of product quality: An integrative review. J. Mark. Res. 26, 3 (1989), 351–357. DOI: https://doi.org/10.1177/002224378902600309
  [55] Riek Laurel D., Rabinowitch Tal-Chen, Chakrabarti Bhismadev, and Robinson Peter. 2009. Empathizing with robots: Fellow feeling along the anthropomorphic spectrum. In Proceedings of the 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, IEEE, Amsterdam, Netherlands, 1–6. DOI: https://doi.org/10.1109/ACII.2009.5349423
  [56] Riek Laurel D., Rabinowitch Tal-Chen, Bremner Paul, Pipe Anthony G., Fraser Mike, and Robinson Peter. 2010. Cooperative gestures: Effective signaling for humanoid robots. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, Osaka, Japan, 61–68. DOI: https://doi.org/10.1109/HRI.2010.5453266
  [57] Robinette Paul, Li Wenchen, Allen Robert, Howard Ayanna M., and Wagner Alan R. 2016. Overtrust of robots in emergency evacuation scenarios. In Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction, IEEE, Christchurch, New Zealand, 101–108. DOI: https://doi.org/10.1109/HRI.2016.7451740
  [58] Roesler Eileen and Onnasch Linda. 2020. The effect of anthropomorphism and failure comprehensibility on human-robot trust. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 107–111. DOI: https://doi.org/10.1177/1071181320641028
  [59] Rosenthal-von der Pütten Astrid M. and Krämer Nicole C. 2014. How design characteristics of robots determine evaluation and uncanny valley related responses. Comput. Human Behav. 36 (2014), 422–439. DOI: https://doi.org/10.1016/j.chb.2014.03.066
  [60] Salem Maha, Lakatos Gabriella, Amirabdollahian Farshid, and Dautenhahn Kerstin. 2015. Would you trust a (faulty) robot? In Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI’15), ACM, 141–148. DOI: https://doi.org/10.1145/2696454.2696497
  [61] Salvucci Dario D. and Goldberg Joseph H. 2000. Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the Eye Tracking Research and Applications Symposium 2000 (ETRA’00), ACM, Palm Beach Gardens, FL, USA, 71–78. DOI: https://doi.org/10.1145/355017.355028
  [62] Skantze Gabriel, Hjalmarsson Anna, and Oertel Catharine. 2014. Turn-taking, feedback and joint attention in situated human-robot interaction. Speech Commun. 65 (2014), 50–66. DOI: https://doi.org/10.1016/j.specom.2014.05.005
  [63] Syrdal Dag Sverre, Dautenhahn Kerstin, Koay Kheng Lee, and Walters Michael L. 2009. The negative attitudes towards robots scale and reactions to robot behaviour in a live human-robot interaction study. In Proceedings of the 23rd Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2009), 109–115. Retrieved March 20, 2020 from http://uhra.herts.ac.uk/handle/2299/9641.
  [64] Tapus Adrian, Mataric Maja J., and Scassellati Brian. 2007. Socially assistive robotics: The grand challenges in helping humans through social interaction. IEEE Robot. Autom. Mag. 14, 1 (2007), 35–42. DOI: https://doi.org/10.1109/MRA.2007.339605
  [65] Teas R. Kenneth and Agarwal Sanjeev. 2000. The effects of extrinsic product cues on consumers’ perceptions of quality, sacrifice, and value. J. Acad. Mark. Sci. 28, 2 (2000), 278–290. DOI: https://doi.org/10.1177/0092070300282008
  [66] Tschöpe Nico, Reiser Julian Elias, and Oehl Michael. 2017. Exploring the uncanny valley effect in social robotics. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI’17), ACM, Vienna, Austria, 307–308. DOI: https://doi.org/10.1145/3029798.3038319
  [67] Wang Shensheng, Lilienfeld Scott O., and Rochat Philippe. 2015. The uncanny valley: Existence and explanations. Rev. Gen. Psychol. 19, 4 (2015), 393–407. DOI: https://doi.org/10.1037/gpr0000056
  [68] Waytz Adam, Cacioppo John, and Epley Nicholas. 2010. Who sees human? The stability and importance of individual differences in anthropomorphism. Perspect. Psychol. Sci. 5, 3 (2010), 219–232. DOI: https://doi.org/10.1177/1745691610369336
  [69] Wiese Eva, Weis Patrick P., and Lofaro Daniel M. 2018. Embodied social robots trigger gaze following in real-time HRI. In Proceedings of the 2018 15th International Conference on Ubiquitous Robots (UR), 477–482. DOI: https://doi.org/10.1109/URAI.2018.8441825
  [70] Wiese Eva, Wykowska Agnieszka, Zwickel Jan, and Müller Hermann J. 2012. I see what you mean: How attentional selection is shaped by ascribing intentions to others. PLoS One 7, 9 (2012), e45391. DOI: https://doi.org/10.1371/journal.pone.0045391
  [71] Yu Chen, Scheutz Matthias, and Schermerhorn Paul. 2010. Investigating multimodal real-time patterns of joint attention in an HRI word learning task. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, Osaka, Japan, 309–316. DOI: https://doi.org/10.1109/hri.2010.5453181
  [72] Złotowski Jakub, Proudfoot Diane, Yogeeswaran Kumar, and Bartneck Christoph. 2015. Anthropomorphism: Opportunities and challenges in human–robot interaction. Int. J. Soc. Robot. 7 (2015), 347–360. DOI: https://doi.org/10.1007/s12369-014-0267-6
