Mixed Initiative Sensemaking With Automation

Open Access Article, Conference Proceedings
Authors: Ahad Alotaibi, Chris Baber

Abstract: Automation may not be 100% correct, or the human may know information that the automation does not; in either case, the human must decide whether or not to follow the automation's recommendation. This research focuses on human sensemaking, specifically how people organise information and how closely this matches what a system does, using the data/frame model. The investigation simulated varied levels of automation as well as different levels of certainty. The scenario used in this study required participants to answer questions to solve a case involving a group attacking an institution in a given location at a specific time. The sensemaking process was studied using the card sorting technique, and the automation's degree of confidence was conveyed using an intelligent analysis approach. The results showed that, although the provided frames may be more practical, people appear to be more consistent when using self-generated frames than when using provided frames. The way people grouped information was not necessarily the same as the way computers grouped it. Furthermore, people appear to believe information presented by a computer with confidence levels represented by scores or colours: they accept the computer's confidence predictions and base their own decisions on them, even when these are not rational.
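The paper does not describe an implementation, but the abstract's comparison between human and automated groupings can be made concrete with a pairwise agreement score over two card sorts. The sketch below is an illustration only: the card identifiers, group labels, and the Jaccard-style measure are assumptions for this example, not the measure the authors report using.

```python
from itertools import combinations

def grouping_agreement(human: dict, machine: dict) -> float:
    """Jaccard-style agreement between two card sorts.

    Each argument maps a card (a piece of information) to the label
    of the group it was placed in. A pair of cards "agrees" when both
    sorts put the two cards in the same group; the score is the ratio
    of pairs co-grouped by both sorts to pairs co-grouped by either.
    """
    cards = sorted(set(human) & set(machine))
    both = either = 0
    for a, b in combinations(cards, 2):
        h = human[a] == human[b]   # co-grouped by the human sort?
        m = machine[a] == machine[b]  # co-grouped by the automation?
        both += h and m
        either += h or m
    return both / either if either else 1.0

# Hypothetical sort of six evidence cards from the attack scenario
# (who / where / when frames are illustrative labels).
human_sort   = {"c1": "who", "c2": "who", "c3": "where",
                "c4": "where", "c5": "when", "c6": "when"}
machine_sort = {"c1": "who", "c2": "who", "c3": "where",
                "c4": "when", "c5": "when", "c6": "when"}

print(f"agreement = {grouping_agreement(human_sort, machine_sort):.2f}")
# -> agreement = 0.40, i.e. the two sorts overlap on only a minority
# of co-grouped pairs, the kind of divergence the abstract describes.
```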

Keywords: Sensemaking, Automation Reliability

DOI: 10.54941/ahfe100868
