Cognitive Studies (認知科学)
Online ISSN : 1881-5995
Print ISSN : 1341-7924
ISSN-L : 1341-7924
Special Issue: Cognitive Science of Rationality
AI's Rationality and Trust Calibration Toward the Rationality of Human–AI Systems
Seiji Yamada
Journal Free Access

2022, Volume 29, Issue 3, pp. 364–370

Abstract

In this paper, we discuss AI's rationality and explain the rationality of a human–AI system in terms of our adaptive trust calibration. First, we describe AI's rationality by introducing the formalization of reinforcement learning. Then we explain our adaptive trust calibration, which has been developed for rational human–AI cooperative decision making. The safety and efficiency of human–AI collaboration often depend on how appropriately humans calibrate their trust in AI agents. Over-trusting an autonomous system can cause serious safety issues. Although many studies have focused on the importance of system transparency in maintaining proper trust calibration, research on detecting and mitigating improper trust calibration remains very limited. To fill these gaps, we propose a method of adaptive trust calibration that consists of a framework for detecting inappropriate calibration status by monitoring the user's reliance behavior, together with cognitive cues, called "trust calibration cues," that prompt the user to re-calibrate trust.
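
The abstract's first step, describing AI's rationality via the formalization of reinforcement learning, is not detailed on this page. As a minimal sketch, the standard Markov-decision-process formalization usually meant by that phrase is the following; the paper's own notation may differ:

```latex
% A sketch of the standard RL formalization (assumed, not reproduced from the
% paper): an agent in an MDP (S, A, P, r, \gamma) is rational in the sense
% that it seeks the policy maximizing expected discounted return.
\[
  \pi^{*} \;=\; \arg\max_{\pi}\;
  \mathbb{E}_{\pi}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t) \right],
  \qquad 0 \le \gamma < 1 .
\]
```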
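The detection framework itself is only summarized in the abstract. Below is a minimal sketch of the idea, under the assumption that miscalibration can be read off a gap between the user's recent reliance rate and the AI's observed reliability; the class, method names, and threshold logic here are hypothetical illustrations, not the paper's implementation:

```python
from collections import deque
from typing import Optional

class TrustCalibrationMonitor:
    """Sketch of an over-/under-trust detector: compare the user's recent
    reliance rate on the AI against the AI's recent observed accuracy."""

    def __init__(self, window: int = 20, margin: float = 0.2):
        self.window = window                    # number of recent trials considered
        self.margin = margin                    # tolerated gap before cueing
        self.relied = deque(maxlen=window)      # True if user accepted the AI's advice
        self.ai_correct = deque(maxlen=window)  # True if the AI's advice was correct

    def record_trial(self, user_relied_on_ai: bool, ai_was_correct: bool) -> None:
        """Log one cooperative decision trial."""
        self.relied.append(user_relied_on_ai)
        self.ai_correct.append(ai_was_correct)

    def calibration_status(self) -> str:
        """Classify current calibration from recent reliance behavior."""
        if len(self.relied) < self.window:
            return "calibrated"  # too little evidence to intervene yet
        reliance_rate = sum(self.relied) / len(self.relied)
        reliability = sum(self.ai_correct) / len(self.ai_correct)
        if reliance_rate > reliability + self.margin:
            return "over-trust"   # user leans on the AI more than it deserves
        if reliance_rate < reliability - self.margin:
            return "under-trust"  # user ignores the AI more than warranted
        return "calibrated"

    def maybe_cue(self) -> Optional[str]:
        """Return a trust calibration cue only when miscalibration is detected."""
        status = self.calibration_status()
        return None if status == "calibrated" else f"trust calibration cue: {status}"
```

In use, each human–AI decision trial would call record_trial, and the interface would display whatever maybe_cue returns; the paper's cognitive cues prompt the user to re-examine the AI, while the fixed-margin comparison above is only one plausible way to operationalize the detection step.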

© 2022 Japanese Cognitive Science Society (日本認知科学会)