ABSTRACT
Building trust is often cited as important for the success of a service or application. When part of the system is an embodied conversational agent (ECA), the design of the ECA has an impact on a user’s trust. In this paper we discuss whether designing an ECA for trust also means designing an ECA to give a false impression of sentience, whether such an implicit deception can undermine a sense of trust, and the impact such a design process may have on a vulnerable user group, in this case users living with dementia. We conclude by arguing that current trust metrics ignore the importance of a willing suspension of disbelief and its role in social computing.
Index Terms
- Embodied Conversational Agents: Trust, Deception and the Suspension of Disbelief