
Two-Way Human-Agent Trust Relationships in Adaptive Cognitive Agent, Adaptive Tasking Scenarios: Literature Metadata Analysis

  • Conference paper
  • Human-Computer Interaction. Theory, Methods and Tools (HCII 2021)
  • Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12762)

Abstract

The rapid development of autonomous systems that act independently and are deeply integrated with humans necessitates trust-based cooperation and collaboration between these agents and the humans they interact with. A greater understanding of two-way trust between humans and artificial agents is of particular interest for situations in which the human makes mistakes or an anomalous situation arises, such as an enemy combatant taking control of friendly AI. The purpose of this paper is to review the state of the art in two-way trust research in human-adaptive-agent teams. A systematic review of academic and technical literature from the last ten years (2010–2020) was performed to collect metadata for analysis and discussion. Details of the literature review, including search databases, search terms, and inclusion-exclusion filtering, are provided. A metadata analysis is discussed comparing measurements of human trust and agent trust; adaptive-scenario and adaptive-agent mechanisms; types of collaborative human-agent tasking; and level of automation and embodiment of the agent.
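The inclusion-exclusion filtering the abstract mentions can be illustrated with a minimal sketch. This is not the authors' actual pipeline; the record fields (`title`, `year`, `keywords`) and the specific criteria are hypothetical, chosen only to show how a year window plus inclusion and exclusion keyword sets narrow a set of collected metadata records.

```python
# Hypothetical sketch of inclusion-exclusion filtering over collected
# literature metadata. Field names and criteria are illustrative only,
# not the schema or criteria used in the paper.

papers = [
    {"title": "Trust calibration in human-agent teams", "year": 2018,
     "keywords": {"trust", "human-agent teaming"}},
    {"title": "Legacy expert systems survey", "year": 2005,
     "keywords": {"expert systems"}},
    {"title": "Adaptive automation and operator workload", "year": 2014,
     "keywords": {"adaptive automation", "trust"}},
]

INCLUDE = {"trust"}           # record must mention at least one of these
EXCLUDE = {"expert systems"}  # record is dropped if it mentions any of these

def passes_filter(paper):
    # Inclusion criteria: published within the 2010-2020 window and
    # overlapping the inclusion keywords; exclusion keywords veto.
    in_window = 2010 <= paper["year"] <= 2020
    included = bool(paper["keywords"] & INCLUDE)
    excluded = bool(paper["keywords"] & EXCLUDE)
    return in_window and included and not excluded

selected = [p["title"] for p in papers if passes_filter(p)]
print(selected)  # the 2005 survey fails the year window and is excluded
```

In a real review the same shape of filter would typically be applied in stages (title/abstract screen, then full-text screen), with each stage's exclusion counts recorded for reporting.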



Author information

Correspondence to Daniel Kennedy or Maartje Hidalgo.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Kennedy, D., Hidalgo, M. (2021). Two-Way Human-Agent Trust Relationships in Adaptive Cognitive Agent, Adaptive Tasking Scenarios: Literature Metadata Analysis. In: Kurosu, M. (ed.) Human-Computer Interaction. Theory, Methods and Tools. HCII 2021. Lecture Notes in Computer Science, vol. 12762. Springer, Cham. https://doi.org/10.1007/978-3-030-78462-1_14


  • DOI: https://doi.org/10.1007/978-3-030-78462-1_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-78461-4

  • Online ISBN: 978-3-030-78462-1

