
Investigating Human-Robot Trust in Emergency Scenarios: Methodological Lessons Learned

Abstract

The word “trust” has many definitions that vary with context and culture, so asking participants whether they trust a robot is not as straightforward as one might think. The perceived risk in a scenario and the precise wording of a question can bias the outcome of a study in ways the experimenter did not intend. This chapter presents the lessons we have learned about trust while conducting human-robot experiments with 770 human subjects. We discuss our work developing narratives that describe trust situations as well as interactive human-robot simulations. These experimental paradigms have guided our research into the meaning of trust, trust loss, and trust repair. Using crowdsourcing to recruit and manage experiment participants yields considerable diversity of opinion, but it also introduces several considerations that must be addressed. Conclusions drawn from these experiments demonstrate the types of biases participants are prone to, as well as techniques for mitigating those biases.
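
The abstract notes that question wording can bias trust judgments and that crowdsourced participant pools require extra care. As a purely illustrative aside, not drawn from the chapter itself, the Python sketch below shows two generic precautions often used in such studies: counterbalancing question wording across participants and filtering responses with an embedded attention check. All names here (Participant, WORDINGS, passed_attention_check) are hypothetical.

```python
# Hypothetical sketch, not the authors' pipeline: one common way to
# counterbalance question wording and screen inattentive crowd workers
# when collecting trust ratings. All names are illustrative.

import random
from dataclasses import dataclass

# Two hypothetical wordings of the same trust question; because precise
# wording can bias responses, each participant sees only one variant.
WORDINGS = [
    "Would you trust this robot to guide you during an emergency?",
    "Would you follow this robot's guidance during an emergency?",
]

@dataclass
class Participant:
    worker_id: str
    wording: str            # which question variant this participant saw
    trust_rating: int       # e.g., a 1-7 Likert response
    attention_answer: str   # response to an embedded attention-check item

def assign_wording(rng: random.Random) -> str:
    """Randomly assign one question wording per participant (between-subjects)."""
    return rng.choice(WORDINGS)

def passed_attention_check(p: Participant, expected: str = "agree") -> bool:
    """Keep only participants who answered the attention-check item as instructed."""
    return p.attention_answer.strip().lower() == expected

def usable_responses(participants: list[Participant]) -> list[Participant]:
    """Drop inattentive workers before analyzing trust ratings."""
    return [p for p in participants if passed_attention_check(p)]

if __name__ == "__main__":
    rng = random.Random(42)
    sample = [
        Participant("w1", assign_wording(rng), 6, "agree"),
        Participant("w2", assign_wording(rng), 2, "disagree"),  # fails the check
    ]
    kept = usable_responses(sample)
    print(f"{len(kept)} of {len(sample)} responses retained")
```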



Acknowledgement

This work was funded by award #FA95501310169 from the Air Force Office of Scientific Research.

Author information

Corresponding author

Correspondence to Paul Robinette.


Copyright information

© 2016 Springer Science+Business Media (outside the USA)

About this chapter

Cite this chapter

Robinette, P., Wagner, A.R., Howard, A.M. (2016). Investigating Human-Robot Trust in Emergency Scenarios: Methodological Lessons Learned. In: Mittu, R., Sofge, D., Wagner, A., Lawless, W. (eds) Robust Intelligence and Trust in Autonomous Systems. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7668-0_8


  • DOI: https://doi.org/10.1007/978-1-4899-7668-0_8

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4899-7666-6

  • Online ISBN: 978-1-4899-7668-0

  • eBook Packages: Computer Science, Computer Science (R0)
