The Hitchhiker’s Guide to a Credible and Socially Present Robot: Two Meta-Analyses of the Power of Social Cues in Human–Robot Interaction


Abstract

Social cues have been regarded as an important concept in human–robot interaction because they can be manipulated to convey robots' perceived genders, personalities, emotions, identities, and so on. This study seeks to understand the overall effects of social cues and applies two meta-analyses to explore a hierarchy of social cues that elicits different degrees of users' social responses. A total of 25 and 44 effect sizes were calculated to represent the respective magnitudes of the effects of social cues on users' social presence (N = 2498) and trust in social robots (N = 4147). Results suggested that although the overall effects of social cues were small, manipulating social robots' facial and kinetic cues can induce medium-to-large effects on users' social presence and trust. In addition, the overall positive effect sizes indicated that designing humanlike, natural, and lifelike cues was effective in evoking users' social presence and trust in social robots. The results of the two meta-analyses contribute to the theoretical development of the Computers are Social Actors paradigm and inform the practical and methodological design of human–robot interaction.


Availability of Data and Materials

The datasets generated during the current study are available from the corresponding author on reasonable request. Some data have already been included in this article (Appendices).

Code Availability

Available upon request.

References

* denotes articles included in the meta-analyses

  1. *Abdulrahman A, Richards D, Bilgin A (2019) A comparison of human and machine-generated voice. In: Spencer S (ed) Proceedings—VRST 2019: 25th ACM symposium on virtual reality software and technology, vol 41, pp 1–2

  2. Abubshait A, Wiese E (2017) You look human, but act like a machine: Agent appearance and behavior modulate different aspects of human–robot Interaction. Front Psychol 8:1393


  3. Adkins M, Brashers D (1995) The power of language in computer-mediated groups. Manag Commun Q 8:289–322


  4. Andrist S, Mutlu B, Tapus A (2015) Look like me: matching robot personality via gaze to increase motivation. In: Proceedings of the 33rd annual ACM conference on human factors in computing systems, pp 3603–3612

  5. Araujo T (2018) Living up to the chatbot hype: the influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Comput Hum Behav 85:183–189


  6. Banks J (2020) Theory of Mind in social robots: replication of five established human tests. Int J Soc Robot 12:403–414


  7. Barco A, de Jong C, Peter J, Kühne R, van Straten C (2020) Robot morphology and children’s perception of social robots: an exploratory study. In: 2020 ACM/IEEE international conference on human-robot interaction, pp 125–127

  8. Bartneck C, Forlizzi J (2004) A design-centered framework for social human-robot interaction. In: RO-MAN 2004: 13th IEEE international workshop on robot and human interactive communication, pp 591–594

  9. *Bevan C, Fraser D (2015) Shaking hands and cooperation in tele-present human-robot negotiation. In: Proceedings of 2015 10th ACM/IEEE international conference on human-robot interaction (HRI), pp 247–254

  10. Biocca F, Harms C, Burgoon JK (2003) Toward a more robust theory and measure of social presence: review and suggested criteria. Presence Teleoperators Virtual Environ 12:456–480


  11. Borenstein M, Hedges LV, Higgins JP, Rothstein HR (2009) Introduction to meta-analysis. Wiley


  12. Breazeal C (2003) Toward sociable robots. Robot Auton Syst 42:167–175


  13. *Burgoon JK, Bonito JA, Bengtsson B, Cederberg C, Lundeberg M, Allspach L (2000) Interactivity in human–computer interaction: a study of credibility, understanding, and influence. Comput Hum Behav 16:553–574


  14. Calvo N, Elgarf M, Perugia G, Peters C, Castellano G (2020) Can a social robot be persuasive without losing children's trust? In: 2020 ACM/IEEE international conference on human-robot interaction, pp 157–159

  15. Carolus A, Binder JF, Muench R, Schmidt C, Schneider F, Buglass SL (2019) Smartphones as digital companions: characterizing the relationship between users and their phones. New Media Soc 21:914–938


  16. *Castro-González Á, Admoni H, Scassellati B (2016) Effects of form and motion on judgments of social robots' animacy, likability, trustworthiness, and unpleasantness. Int J Hum Comput Stud 90:27–38


  17. Chen Y (2006) Olfactory display: development and application in virtual reality therapy. In: Artificial reality and tele-existence—workshops. ICAT 2006. IEEE, pp 580–584

  18. *Chérif E, Lemoine JF (2019) Anthropomorphic virtual assistants and the reactions of Internet users: an experiment on the assistant’s voice. Rech Appl Mark 34:28–47


  19. *Chiou EK, Schroeder NL, Craig SD (2020) How we trust, perceive, and learn from virtual humans: the influence of voice quality. Comput Educ 146:103756


  20. *Cho E, Molina MD, Wang J (2019) The effects of modality, device, and task differences on perceived human-likeness of voice-activated virtual assistants. Cyberpsychol Behav Soc Netw 22:515–520


  21. Choi S, Liu SQ, Mattila AS (2019) “How may I help you?” Says a robot: examining language styles in the service encounter. Int J Hosp Manag 82:32–38


  22. Cohen J (1988) Statistical power analysis for the behavioral sciences. Routledge, New York


  23. Cohen J, Cohen P (1983) Applied multiple regression/correlation analysis in behavioral sciences. Erlbaum


  24. Cooper H (2015) Research synthesis and meta-analysis: a step-by-step approach. SAGE Publications


  25. Craenen B, Deshmukh A, Foster ME, Vinciarelli A (2018) Do we really like robots that match our personality? The case of Big-Five traits, Godspeed scores and robotic gestures. In: 2018 27th IEEE international symposium on robot and human interactive communication (RO-MAN). IEEE, pp 626–631

  26. Cummings JJ, Bailenson JN (2016) How immersive is enough? A meta-analysis of the effect of immersive technology on user presence. Media Psychol 19:272–309


  27. Duffy BR (2003) Anthropomorphism and the social robot. Robot Auton Syst 42:177–190


  28. *Elkins AC, Derrick DC (2013) The sound of trust: voice as a measurement of trust during interactions with embodied conversational agents. Group Decis Negot 22:897–913


  29. *Erebak S, Turgut T (2019) Caregivers’ attitudes toward potential robot coworkers in elder care. Cogn Technol Work 21:327–336


  30. Fiore SM, Wiltshire TJ, Lobato EJC, Jentsch FG, Huang WH, Axelrod B (2013) Toward understanding social cues and signals in human–robot interaction: Effects of robot gaze and proxemic behavior. Front Psychol 4:1–15


  31. Fiske ST, Taylor SE (1991) Social cognition, 2nd edn. McGraw Hill


  32. Fox J, Ahn SJ, Janssen JH, Yeykelis L, Segovia KY, Bailenson JN (2015) Avatars versus agents: a meta-analysis quantifying the effect of agency on social influence. Hum Comput Interact 30:401–432


  33. Fox J, Gambino A (2021) Relationship development with humanoid social robots: applying interpersonal theories to human-robot interaction. Cyberpsychol Behav Soc Netw 24:294–299


  34. Gambino A, Fox J, Ratan R (2020) Building a stronger CASA: extending the computers are social actors paradigm. Hum Mach Commun 1:71–85


  35. Gaudiello I, Zibetti E, Lefort S, Chetouani M, Ivaldi S (2016) Trust as indicator of robot functional and social acceptance. An experimental study on user conformation to iCub answers. Comput Hum Behav 61:633–655


  36. Gauthier I, Tarr MJ (1997) Becoming a “Greeble” expert: exploring mechanisms for face recognition. Vis Res 37:1673–1682


  37. *Ghazali AS, Ham J, Barakova EI, Markopoulos P (2018) Effects of robot facial characteristics and gender in persuasive human-robot interaction. Front Robot AI 5:73


  38. *Ghazali AS, Ham J, Barakova E, Markopoulos P (2019) Assessing the effect of persuasive robots interactive social cues on users’ psychological reactance, liking, trusting beliefs and compliance. Adv Robot 33:325–337


  39. *Goble H, Edwards C (2018) A robot that communicates with vocal fillers has … Uhhh… greater social presence. Commun Res Rep 35:256–260


  40. Gong L (2008) How social is social responses to computers? The function of the degree of anthropomorphism in computer representations. Comput Hum Behav 24:1494–1509


  41. Gong L, Lai J (2003) To mix or not to mix synthetic speech and human speech? Contrasting impact on judge-rated task performance versus self-rated performance and Attitudinal Responses. Int J Speech Technol 6:123–131


  42. *Gong L, Nass C (2007) When a talking-face computer agent is half-human and half-humanoid: human identity and consistency preference. Hum Commun Res 33:163–193


  43. Hancock PA, Billings DR, Schaefer KE, Chen JY, De Visser EJ, Parasuraman R (2011) A meta-analysis of factors affecting trust in human-robot interaction. Hum Factors 53:517–527


  44. Hanson D, Olney A, Pereira IA, Zielke M (2005) Upending the uncanny valley. AAAI 5:24–31


  45. Heeter C (1992) Being there: the subjective experience of presence. Presence Teleoperators Virtual Environ 1:262–271


  46. Heider F, Simmel M (1944) An experimental study of apparent behavior. Am J Psychol 57:243–259


  47. Higgins JPT, Thompson SG, Deeks JJ, Altman DG (2003) Measuring inconsistency in meta-analyses. BMJ 327:557–560


  48. Hinds PJ, Roberts TL, Jones H (2004) Whose job is it anyway? A study of human-robot interaction in a collaborative task. Hum Comput Interact 19:151–181


  49. Ho A, Hancock J, Miner AS (2018) Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. J Commun 68:712–733


  50. *Hoegen R, Aneja D, McDuff D, Czerwinski M (2019) An end-to-end conversational style matching agent. In: Proceedings of the 19th ACM international conference on intelligent virtual agents, pp 111–118

  51. *Hoffmann L, Derksen M, Kopp S (2020) What a pity, Pepper! How warmth in robots’ language impacts reactions to errors during a collaborative task. In: Companion of the 2020 ACM/IEEE international conference on human-robot interaction, pp 245–247

  52. *Hoppe M, Rossmy B, Neumann DP, Streuber S, Schmidt A, Machulla TK (2020) A human touch: social touch increases the perceived human-likeness of agents in virtual reality. In: Proceedings of the 2020 CHI conference on human factors in computing systems, pp 1–11

  53. Horstmann AC, Bock N, Linhuber E, Szczuka JM, Straßmann C, Krämer NC (2018) Do a robot’s social skills and its objection discourage interactants from switching the robot off? PLoS ONE 13:1–25


  54. Johansson G (1973) Visual perception of biological motion and a model for its analysis. Percept Psychophys 14:201–211


  55. Johnson BT, Eagly AH (2000) Quantitative synthesis of social psychological research. In: Reis HT, Judd CM (eds) Handbook of research methods in social and personality psychology. Cambridge University Press, pp 496–528


  56. Johnson D, Gardner J, Wiles J (2004) Experience as a moderator of the media equation: the impact of flattery and praise. Int J Hum Comput Stud 61:237–258


  57. Kim RH, Moon Y, Choi JJ, Kwak SS (2014) The effect of robot appearance types on motivating donation. In: Proceedings of the 2014 ACM/IEEE international conference on human-robot interaction, pp 210–211

  58. Kobiella A, Grossmann T, Reid VM, Striano T (2008) The discrimination of angry and fearful facial expressions in 7-month-old infants: an event-related potential study. Cogn Emot 22:134–146


  59. Lakens D (2013) Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs. Front Psychol 4:863


  60. Law T, Chita-Tegmark M, Scheutz M (2020) The interplay between emotional intelligence, trust, and gender in human–robot interaction. Int J Soc Robot 13:297–309


  61. Lee KM (2004) Presence explicated. Commun Theory 14:27–50


  62. Lee EJ (2010) What triggers social responses to flattering computers? Experimental tests of anthropomorphism and mindlessness explanations. Commun Res 37:191–214


  63. *Lee KM, Jung Y, Kim J, Kim SR (2006) Are physically embodied social agents better than disembodied social agents? The effects of physical embodiment, tactile interaction, and people’s loneliness in human–robot interaction. Int J Hum Comput Stud 64:962–973


  64. *Lee EJ, Nass C (1999) Effects of the form of representation and number of computer agents on conformity. In: CHI ’99 extended abstracts on human factors in computing systems, pp 238–239

  65. *Lee KM, Nass C (2005) Social psychological origins of feelings of presence: creating social presence with machine-generated voices. Media Psychol 7:31–45


  66. Lee KM, Peng W, Jin SA, Yan C (2006) Can robots manifest personality? An empirical test of personality recognition, social responses, and social presence in human–robot interaction. J Commun 56:754–772


  67. Leichtmann B, Nitsch V (2020) How much distance do humans keep toward robots? Literature review, meta-analysis, and theoretical considerations on personal space in human-robot interaction. J Environ Psychol 68:101386


  68. Li J (2015) The benefit of being physically present: a survey of experimental works comparing copresent robots, telepresent robots and virtual agents. Int J Hum Comput Stud 77:23–37


  69. Li J, Chignell M (2011) Communication of emotion in social robots through simple head and arm movements. Int J Soc Robot 3:125–142


  70. Li JJ, Ju W, Reeves B (2017) Touching a mechanical body: tactile contact with body parts of a humanoid robot is physiologically arousing. J Hum Robot Interact 6:118–130


  71. Li J, Kizilcec R, Bailenson J, Ju W (2016) Social robots and virtual agents as lecturers for video instruction. Comput Hum Behav 55:1222–1230


  72. Li D, Rau PLP, Li Y (2010) A cross-cultural study: effect of robot appearance and task. Int J Soc Robot 2:175–186


  73. Lombard M, Ditton T (1997) At the heart of it all: the concept of presence. J Comput Mediat Commun 3(2)

  74. Lombard M, Xu K (2021) Social responses to media technologies: the Media are Social Actors paradigm. Hum Mach Commun 2:29–55


  75. *Looije R, Neerincx MA, Cnossen F (2010) Persuasive robotic assistant for health self-management of older adults: design and evaluation of social behaviors. Int J Hum Comput Stud 68:386–397


  76. Martin D, Macrae CN (2007) A face with a cue: exploring the inevitability of person categorization. Eur J Soc Psychol 37:806–816


  77. Martini MC, Gonzalez CA, Wiese E (2016) Seeing minds in others: Can agents with robotic appearance have human-like preferences? PLoS ONE 11:e0146310


  78. Mayer RE, Sobko K, Mautone P (2003) Social cues in multimedia learning: role of speaker’s voice. J Educ Psychol 95:419–425


  79. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group (2009) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 6:e1000097


  80. Morewedge CK, Preston J, Wegner DM (2007) Timescale bias in the attribution of mind. J Pers Soc Psychol 93:1–11


  81. Mori M, MacDorman KF, Kageki N (2012) The uncanny valley. IEEE Robot Autom Mag 19:98–100


  82. Naneva S, Gou MS, Webb TL, Prescott TJ (2020) A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. Int J Soc Robot 12:1179–1201


  83. Nass C (2004) Etiquette equality: Exhibitions and expectations of computer politeness. Commun ACM 47:35–37


  84. Nass C, Brave S (2005) Wired for speech: how voice activates and advances the human-computer relationship. MIT Press, Cambridge


  85. Nass C, Fogg BJ, Moon Y (1996) Can computers be teammates? Int J Hum Comput Stud 45:669–678


  86. Nass C, Lombard M, Henriksen L, Steuer J (1995) Anthropocentrism and computers. Behav Inf Technol 14:229–238


  87. Nass C, Moon Y (2000) Machines and mindlessness: social responses to computers. J Soc Issues 56:81–103


  88. Nass C, Moon Y, Fogg BJ, Reeves B, Dryer C (1995) Can computer personalities be human personalities? In: Proceedings of conference companion on human factors in computing systems, pp 228–229

  89. Nass C, Moon Y, Green N (1997) Are machines gender neutral? Gender-stereotypic responses to computers with voices. J Appl Soc Psychol 27:864–876


  90. Nass C, Reeves B, Leshner G (1996) Technology and roles: a tale of two TVs. J Commun 46:121–128


  91. Nass C, Steuer J, Tauber ER (1994) Computers are social actors. In: Proceedings of the SIGCHI conference on human factors in computing systems, pp 72–78

  92. *Natarajan M, Gombolay M (2020) Effects of anthropomorphism and accountability on trust in human robot interaction. In: Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction, pp 33–42

  93. *Nomura T, Kanda T (2015) Influences of evaluation and gaze from a robot and humans’ fear of negative evaluation on their preferences of the robot. Int J Soc Robot 7:155–164


  94. Nomura T, Kanda T, Suzuki T, Kato K (2009) Age differences and images of robots: social survey in Japan. Interact Stud 10:374–391


  95. Nomura T, Yamada S, Kanda T, Suzuki T, Kato K (2009) Influences of concerns toward emotional interaction into social acceptability of robots. In: 2009 4th ACM/IEEE international conference on human-robot interaction (HRI), pp 231–232

  96. Oh CS, Bailenson JN, Welch GF (2018) A systematic review of social presence: definition, antecedents, and implications. Front Robot AI 5:114


  97. Okumura Y, Kanakogi Y, Kanda T, Ishiguro H, Itakura S (2013) Infants understand the referential nature of human gaze but not robot gaze. J Exp Child Psychol 116:86–95


  98. *Park E, Lee J (2014) I am a warm robot: the effects of temperature in physical human-robot interaction. Robotica 32:133–142


  99. Perez S (2020). Duplex, Google’s conversational AI, has updated 3M+ business listings since pandemic. https://techcrunch.com/2020/10/15/duplex-googles-conversational-a-i-has-updated-3m-business-listings-since-pandemic/. Accessed 15 Oct 2020

  100. Pfeifer R, Scheier C (1999) Understanding intelligence. MIT Press, Cambridge


  101. Rains SA, Matthes J, Palomares NA (2020) Communication science and meta-analysis: introduction to the special issue. Hum Commun Res 46:115–119


  102. Reeves B, Nass C (2000) Perceptual user interfaces: perceptual bandwidth. Commun ACM 43:65–70


  103. Reeves B, Nass C (2002) The media equation: How people treat computers, television, and new media like real people and places. CSLI Publications


  104. Rosenthal R (1979) The file drawer problem and tolerance for null results. Psychol Bull 86(3):638–641


  105. Rosenthal R (1991) Meta-analytic procedures for social research. SAGE, Newbury Park


  106. Rosenthal R (1995) Writing meta-analytic reviews. Psychol Bull 118:183–192


  107. Rosenthal R, DiMatteo MR (2001) Meta-analysis: recent developments in quantitative methods for literature reviews. Annu Rev Psychol 52:59–82


  108. Sah YJ, Peng W (2015) Effects of visual and linguistic anthropomorphic cues on social perception, self-awareness, and information disclosure in a health website. Comput Hum Behav 45:392–401


  109. Salem M, Eyssel F, Rohlfing K, Kopp S, Joublin F (2013) To err is human(-like): effects of robot gesture on perceived anthropomorphism and likability. Int J Soc Robot 5:313–323


  110. Santamaria T, Nathan-Roberts D (2017) Personality measurement and design in human-robot interaction: a systematic and critical review. In: Proceedings of the human factors and ergonomics society annual meeting, vol 61, pp 853–857

  111. Schmidt KL, Cohn JF (2001) Human facial expressions as adaptations: evolutionary questions in facial expression research. Am J Phys Anthropol 116:3–24


  112. Serrano JM, Iglesias J, Loeches A (1992) Visual discrimination and recognition of facial expressions of anger, fear, and surprise in 4- to 6-month-old infants. Dev Psychobiol 25:411–425


  113. *Shamekhi A, Liao QV, Wang D, Bellamy RK, Erickson T (2018) Face Value? Exploring the effects of embodiment for a group facilitation agent. In: Proceedings of the 2018 CHI conference on human factors in computing systems, pp 1–13

  114. Sherry JL (2001) The effects of violent video games on aggression: a meta-analysis. Hum Commun Res 27:409–431


  115. Shin DH, Choo H (2011) Modeling the acceptance of socially interactive robotics: Social presence in human–robot interaction. Interact Stud 12:430–460


  116. Spears R, Postmes T (2015) Group identity, social influence, and collective action online. Extensions and applications of the SIDE model. In: Sundar S (ed) The handbook of the psychology of communication technology. Wiley, New York, pp 23–46


  117. Stock-Homburg R, Hannig M, Lilienthal L (2020) Conversational flow in human-robot interactions at the workplace: comparing humanoid and android robots. In: Wagner AR et al (eds) Social Robotics. ICSR 2020. Lecture notes in computer science, vol 12483. Springer, Cham. https://doi.org/10.1007/978-3-030-62056-1_4


  118. Stower R, Calvo-Barajas N, Castellano G, Kappas A (2021) A meta-analysis on children’s trust in social robots. Int J Soc Robot 13:1979–2001


  119. *Straten CLV, Peter J, Kühne R, Barco A (2020) Transparency about a robot’s lack of human psychological capacities: effects on child-robot perception and relationship formation. ACM Trans Hum Robot Interact THRI 9:1–22


  120. Sundar SS (2020) Rise of machine agency: a framework for studying the psychology of human–AI interaction (HAII). J Comput Mediat Commun 25(1):74–88


  121. *Terzioğlu Y, Mutlu B, Şahin E (2020) Designing social cues for collaborative robots: the role of gaze and breathing in human-robot collaboration. In: Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction, pp 343–357

  122. *Torre I, Goslin J, White L (2020) If your device could smile: people trust happy-sounding artificial agents more. Comput Hum Behav 105:106215


  123. Torre I, Goslin J, White L, Zanatto D (2018) Trust in artificial voices: a "congruency effect" of first impressions and behavioural experience. In: Proceedings of the technology, mind, and society, pp 1–6

  124. Treal T, Jackson PL, Meugnot A (2020) Combining trunk movement and facial expression enhances the perceived intensity and believability of an avatar’s pain expression. Comput Hum Behav 112:106451


  125. Tung FW, Deng YS (2007) Increasing social presence of social actors in e-learning environments: effects of dynamic and static emoticons on children. Displays 28:174–180


  126. *van den Brule R, Dotsch R, Bijlstra G, Wigboldus DH, Haselager P (2014) Do robot performance and behavioral style affect human trust? Int J Soc Robot 6:519–531


  127. *van Vugt HC, Konijn EA, Hoorn JF, Veldhuis J (2009) When too heavy is just fine: creating trustworthy e-health advisors. Int J Hum Comput Stud 67:571–583


  128. *Velner E, Boersma PP, de Graaf MM (2020) Intonation in robot speech: does it work the same as with people? In: Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction, pp 569–578

  129. *de Visser EJ, Monfort SS, McKendrick R, Smith MAB, McKnight PE, Krueger F, Parasuraman R (2016) Almost human: anthropomorphism increases trust resilience in cognitive agents. J Exp Psychol Appl 22:331–349


  130. Walters ML, Lohse M, Hanheide M, Wrede B, Syrdal DS, Severinson-Eklundh K (2011) Evaluating the robot personality and verbal behavior of domestic robots using video-based studies. Adv Robot 25:2233–2254


  131. Wang LC, Baker J, Wagner JA, Wakefield K (2007) Can a retail web site be social? J Mark 71:143–157


  132. Wang B, Rau PLP (2019) Influence of embodiment and substrate of social robots on users’ decision-making and attitude. Int J Soc Robot 11:411–421


  133. *Weitz K, Schiller D, Schlagowski R, Huber T, André E (2019) "Do you trust me?" Increasing user-trust by integrating virtual agents in explainable AI interaction design. In: Proceedings of the 19th ACM international conference on intelligent virtual agents, pp 7–9

  134. Westerman D, Cross AC, Lindmark PG (2019) I believe in a thing called bot: perceptions of the humanness of “chatbots.” Commun Stud 70:295–312


  135. Woods S, Dautenhahn K, Kaouri C, te Boekhorst R, Koay KL, Walters ML (2007) Are robots like people? Relationships between participant and robot personality traits in human–robot interaction studies. Interact Stud 8:281–305


  136. *Xu K (2019) First encounter with robot Alpha: How individual differences interact with vocal and kinetic cues in users’ social responses. New Media Soc 21:2522–2547


  137. *Xu K (2020) Language, modality, and mobile media use experiences: social responses to smartphone cues in a task-oriented context. Telemat Inform 48:101344


  138. You S, Robert LP (2018) Human-robot similarity and willingness to work with a robotic co-worker. In: Proceedings of the 2018 ACM/IEEE international conference on human-robot interaction, pp 251–260

  139. Zhao S (2003) Toward a taxonomy of copresence. Presence Teleoperators Virtual Environ 12:445–455


  140. Zhao S (2006) Humanoid social robots as a medium of communication. New Media Soc 8:401–419


  141. Ziemke T (2003) What’s the thing called embodiment? In: Proceedings of the annual meeting of the cognitive science society, vol 25, pp 1305–1310


Funding

No funding was received to assist with the preparation for this manuscript.

Author information

Corresponding author

Correspondence to Kun Xu.

Ethics declarations

Conflict of interest

The authors have no conflict of interest to disclose, and this manuscript is not under consideration for publication elsewhere.

Ethical Approval

This is a meta-analysis study. The Research Ethics Committee of the university has confirmed that no ethical approval is required.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1a

See Table 6.

Table 6 Descriptive summary of sample studies on social presence

Appendix 1b

See Table 7.

Table 7 Descriptive summary of sample studies on trust

Appendix 2

3.1 Publication Bias

To identify a potential "file-drawer problem," which can lead to overestimation of the overall pooled effects, publication bias was assessed with Rosenthal's [104] fail-safe N approach. This approach compares the fail-safe N against a bias criterion (the "fail-safe N bias"): if the fail-safe N is larger than the fail-safe N bias, the meta-analysis shows no evidence of publication bias; if it is smaller, publication bias is likely. In this study, the R package metafor yielded a fail-safe N of 4066. The fail-safe N bias was 355 under the zero-coded condition and 295 under the max-coded condition. In both conditions, the fail-safe N bias was smaller than the fail-safe N, indicating no evidence of publication bias (see the funnel plots below).
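For reference, the article reports only the resulting values; the conventional formulation of Rosenthal's fail-safe N, and of the tolerance criterion it is typically compared against, is stated here as an assumption rather than as the article's own description:

$$N_{fs} = \frac{\left(\sum_{i=1}^{k} Z_i\right)^{2}}{Z_{\alpha}^{2}} - k, \qquad \text{tolerance ("fail-safe } N \text{ bias")} = 5k + 10$$

where Z_i is the standard normal deviate of effect size i, k is the number of effect sizes, and Z_α = 1.645 for a one-tailed α = .05. Consistent with this reading, the zero-coded criterion of 355 equals 5 × 69 + 10 for the 25 + 44 = 69 effect sizes analyzed.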

Funnel plot for zero-coded meta-analysis (figure not reproduced)

Funnel plot for max-coded meta-analysis (figure not reproduced)
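A minimal R sketch of how a fail-safe N and funnel plots of this kind can be produced with the metafor package. The data frame dat and its columns yi (effect sizes) and vi (sampling variances) are hypothetical placeholders, not the study's data:

    # Illustrative only: dat is a hypothetical data frame with one row per effect
    # size, where yi is the standardized effect size and vi its sampling variance.
    library(metafor)

    res <- rma(yi, vi, data = dat, method = "REML")   # random-effects pooled model

    # Rosenthal's fail-safe N, compared against the 5k + 10 tolerance criterion
    fsn(yi, vi, data = dat, type = "Rosenthal")

    # Funnel plot of observed effect sizes against their standard errors
    funnel(res)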

Appendix 3

4.1 Forest Plot for Individual Effect Sizes

The effects of social cues on social presence (zero-coded; figure not reproduced)

The effects of social cues on social presence (max-coded; figure not reproduced)

The effects of social cues on trust (zero-coded; figure not reproduced)

The effects of social cues on trust (max-coded; figure not reproduced)
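Continuing the hypothetical metafor sketch from Appendix 2, forest plots such as those listed above are conventionally drawn from the fitted model object:

    # Forest plot of the individual effect sizes and the pooled estimate
    # (res is the hypothetical random-effects model from the earlier sketch)
    forest(res)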

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Xu, K., Chen, M. & You, L. The Hitchhiker’s Guide to a Credible and Socially Present Robot: Two Meta-Analyses of the Power of Social Cues in Human–Robot Interaction. Int J of Soc Robotics 15, 269–295 (2023). https://doi.org/10.1007/s12369-022-00961-3
