On Developing User Interfaces for Piloting Unmanned Systems
April 2014 • Conference Paper
This paper was first published in the proceedings of the International Workshop on Robotic Sensor Networks in April 2014.
Software Engineering Institute
Current artificial intelligence (AI) control paradigms tend to be one-to-one in nature and cater to monolithic systems, e.g., between one operator and one large, multi-functional robot. However, the future of AI is likely to involve smaller, more distributed systems deployed at larger scale. Along that path, major advances have been made in the commercialization of smaller unmanned autonomous systems (UAS) like quad-copters and small ground-based robots that are equipped with wireless radios for communication between these systems and human operators, control stations, or other UAS. Even for systems
built with capabilities for communication between UAS, the main control paradigm largely remains one-to-one and is geared toward joystick control or waypoint-based systems that force operators to define a complete path for each UAS participating in a mission environment. In this paper, we discuss recent efforts in user interface (UI) design in the Group Autonomy for Mobile Systems project at Carnegie Mellon University. The goal of the UI development
is to reduce the cognitive load placed on human operators, especially those wanting to use swarms of devices in mission-critical contexts like search-and-rescue. We detail the coupling of distributed AI algorithms with these map-based, minimalistic interfaces, and highlight the reduction in user actions required to accomplish similar swarm-based maneuvers compared with waypoint-based guidance systems. We believe these types of interfaces may prove pivotal in bringing group-based UAS control into more