ABSTRACT
As robots become more integrated into humans' daily activities, it is essential to understand how human decision-making varies during co-learning with robots in real-world scenarios. Despite great advances in developing humanoid robots, which aim to foster a seamless collaborative world where humans and robots coexist, a gap remains in the social bond between humans and robots, particularly in tasks demanding optimal teamwork. In alignment with current pioneering efforts in the human-robot collaboration field, this paper presents an experimental study leading to a rationale-based analysis and classification of human behavioral dynamics during a joint collaborative pick-and-place task with a robotic arm. Our post-experimental analysis categorized human behavioral dynamics into three broad groups: "strategic explorers and decoders," "reactive navigators and dynamic responders," and "score maximizers and ideal collaborators." We provide an in-depth analysis of each group, exploring potential reasons for their observed behavioral patterns and irrational decisions, substantiated by intuitions from psychology and behavioral game theory, including the concepts of false belief and strategy development.