Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
4942373 | Cognitive Systems Research | 2017 | 24 Pages |
Abstract
Agent transparency has been proposed as a means of facilitating operators' situation awareness in human-robot teams. Sixty participants performed a dual-monitoring task: monitoring an intelligent, autonomous robot teammate while performing threat detection in a virtual environment. The robot displayed four different interfaces, each corresponding to a level of information from the Situation awareness-based Agent Transparency (SAT) model. Participants' situation awareness of the robot, confidence in that situation awareness, trust in the robot, workload, cognitive processing, and perceived usability of the robot's displays were assessed. Results indicate that participants using interfaces corresponding to higher SAT levels had greater situation awareness, cognitive processing, and trust in the robot than when they viewed lower-level SAT interfaces. No differences in workload or perceived usability of the displays were detected. These findings indicate that transparency has a significant effect on situation awareness, trust, and cognitive processing.
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Anthony R. Selkowitz, Shan G. Lakhmani, Jessie Y.C. Chen