Novel egocentric robot teleoperation interfaces for search and rescue
Teleoperation is a powerful tool: a person can attend conferences and meetings overseas, visit family abroad, travel to uncharted locations, and explore dangerous environments. However, during real-time remote teleoperation, the operator faces continuous challenges, chiefly maintaining remote awareness and performing under high cognitive load. Search and rescue teleoperation exacerbates these difficulties. To successfully teleoperate a remote robot and accomplish tasks, the operator must maintain a high level of situation awareness, understanding the remote robot's current state and its environment while remaining aware of the mission tasks. However, the operator has only limited access to the remote environment through teleoperation interfaces, and those interfaces can deliver only limited data (e.g., a restricted field of view and a small set of sensor types). Worse, in search and rescue teleoperation, the operator must make decisions that may affect victims' lives based on this limited information. Because teleoperation interfaces are the operator's only gateway to the remote site for most of the mission, how they deliver information directly affects the operator's situation awareness. Research in human-computer interaction and human-robot interaction has shown that the way information is presented affects users' overall task performance in terms of accuracy, completion time, and workload. We extend this theme to search and rescue teleoperation scenarios. We explore novel interface designs that support the operator by retrieving remote information and presenting it in a way the operator can understand in time. Our designs help the operator increase their situation awareness and overall task performance. We further discuss the benefits and drawbacks of our implementations to inform future teleoperation interface designs.