THE USE OF GAME PLAY IN A COMPARISON OF HUMAN ROBOT INTERACTIONS AND THE ADOPTION OF LEARNING STRATEGIES
1 University of California, San Diego (UNITED STATES)
2 University of Nebraska, Lincoln (UNITED STATES)
About this paper:
Appears in: INTED2024 Proceedings
Publication year: 2024
Pages: 5262-5268
ISBN: 978-84-09-59215-9
ISSN: 2340-1079
DOI: 10.21125/inted.2024.1359
Conference name: 18th International Technology, Education and Development Conference
Dates: 4-6 March, 2024
Location: Valencia, Spain
Abstract:
The increasing reliance on AI systems and robotics has intensified the need for their safe, effective, and efficient use. Kasparov's law suggests that a weak human with a weak machine and a strong process can outperform a strong human with a strong machine and an inferior process [1]. Thus, the ultimate benefit of AI and robotics depends directly upon the quality of the processes used in human-machine interaction. One critical aspect of these processes is the level of trust implicit in the relationship: the human's trust in the AI, and the AI's trust in the human. As these authors have noted, while technological advances have increased the agility, power, and safety features of AIs that interact with humans, there remains a reluctance to engage with AI in everyday physically embodied praxis in circumstances that require trust and human connection. The development of trust and trustworthiness has been established as critical to ethical science [2][3] and to creating a classroom atmosphere in which students and teachers can build an effective learning environment, particularly when engaging in difficult conversations [4].

This research analyzes the effect of the mode of AI/robotic behavior on the human relationship. The work builds on prior work [5] using the strategic game of Rock Paper Scissors (RPS) as a performative demonstration case study for ideas of "play", "surprise", and "learning" between human and non-human robotic participants.
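
For concreteness, the win/loss logic of RPS can be sketched in a few lines of Python (the language of the co-robotic arm's interface described below). The names here are illustrative assumptions, not the study's actual code:

    # Each move beats exactly one other move.
    BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

    def resolve(move_a: str, move_b: str) -> int:
        """Return +1 if move_a wins, -1 if move_b wins, 0 for a draw."""
        if move_a == move_b:
            return 0
        return 1 if BEATS[move_a] == move_b else -1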

The series of games in human|robot and robot|robot interactions can be structured around a number of different goals, informed by differences in how the robots functionally operate. The humanoid robot used in this study, though capable of incorporating machine learning, is programmed here with fixed, pre-programmed, wholly randomized strategies in real-time play, thereby approaching a viable mixed-strategy Nash equilibrium from classic game theory. The co-robotic arm used in this study, by contrast, is controlled via a Python interface that can support many interactive implementations, representing a viable test model of bounded rationality in evolutionary game theory. One goal is to maximize the number of wins over the human opponent; another is to improve the game play of the human opponent. At the conclusion of each series of games, the human opponent will be asked a series of questions about their experience, including their attachment to the AI/robot, their level of trust, and their desire to play additional games in the future. This information will be compared with the skill level of the human player and with the skill level (i.e., the algorithm) with which the AI/robot played the game, and will be used to produce insights into how humans and AI/robots might most effectively partner and learn. Contrasting learning-enabled, predictive AI systems (the co-robotic arm) with rule-based, idealized rational decision-making (the humanoid robot) emphasizes similarities and differences between contemporary definitions of "learning" in human cognition and in machine learning.
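
The two strategy modes can be illustrated with a short Python sketch: a uniformly random player approximating the mixed-strategy Nash equilibrium of RPS (each move played with probability 1/3), and a simple frequency-counting predictor as one plausible boundedly rational, learning-enabled opponent model. The class names and the frequency heuristic are assumptions for illustration only, not the algorithms actually deployed on either robot:

    import random
    from collections import Counter

    MOVES = ["rock", "paper", "scissors"]
    # For each move, the move that beats it.
    COUNTER_OF = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

    class NashRandomPlayer:
        """Fixed strategy: uniform random play, the mixed-strategy Nash
        equilibrium of RPS. Illustrates the rule-based humanoid-robot mode."""
        def choose(self) -> str:
            return random.choice(MOVES)

        def observe(self, opponent_move: str) -> None:
            pass  # fixed strategy: ignores the opponent entirely

    class FrequencyPredictor:
        """Adaptive strategy: count the opponent's past moves and counter
        their most frequent one. Illustrates one simple learning-enabled
        model of the kind the co-robotic arm's interface could implement."""
        def __init__(self) -> None:
            self.history: Counter = Counter()

        def choose(self) -> str:
            if not self.history:
                return random.choice(MOVES)  # no data yet: play randomly
            predicted = self.history.most_common(1)[0][0]
            return COUNTER_OF[predicted]

        def observe(self, opponent_move: str) -> None:
            self.history[opponent_move] += 1

Against a human who favors one move, the predictor converges on the counter to that move, while the random player remains unexploitable but also unable to exploit; this is the contrast between equilibrium play and bounded, adaptive learning described above.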
Keywords:
Artificial Intelligence, Game Theory, Trust, Surprise, Human|Robot Interaction.