DOI: 10.1145/3597512.3597526 · TAS Conference Proceedings · Extended Abstract

Embodied Conversational Agents: Trust, Deception and the Suspension of Disbelief

Published: 11 July 2023

ABSTRACT

Building trust is often cited as important for the success of a service or application. When part of the system is an embodied conversational agent (ECA), the design of the ECA has an impact on a user’s trust. In this paper we discuss whether designing an ECA for trust also means designing it to give a false impression of sentience, whether such implicit deception can itself undermine trust, and what impact such a design process may have on a vulnerable user group, in this case users living with dementia. We conclude by arguing that current trust metrics ignore the importance of a willing suspension of disbelief and its role in social computing.


Published in

TAS '23: Proceedings of the First International Symposium on Trustworthy Autonomous Systems
July 2023, 426 pages
ISBN: 9798400707346
DOI: 10.1145/3597512

Copyright © 2023 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

          Qualifiers

          • extended-abstract
          • Research
          • Refereed limited
