
Adaptive Autonomy as a Means for Implementing Shared Ethics in Human-AI Teams

March 2022 Conference Paper
Allyson I. Hauptman (Clemson University), Beau G. Schelble (Clemson University), Nathan J. McNeese (Clemson University)

This paper was presented at the 2022 AAAI Spring Symposium on AI Engineering.


Software Engineering Institute


Rapid increases in artificial intelligence technologies are resulting in closer, more symbiotic interactions between artificially intelligent agents and human beings in a variety of contexts. One such context that has only recently begun to receive much attention is human-AI teaming, in which an AI agent operates as an interdependent teammate rather than a tool. These teams present unique challenges in creating and maintaining shared team understanding, specifically when it comes to a shared team ethical code. Because teams change in composition, goals, and environments, it is imperative that AI teammates be capable of updating their ethical codes in concert with their human teammates. This paper proposes a two-part model for implementing a dynamic ethical code for AI teammates in human-AI teams. The first part of the model proposes that the ethical code, in addition to the agent's team role, be used to inform an adaptive AI agent of when and how to adapt its level of autonomy. The second part of the model explains how that ethical code is continually updated based on the AI agent's iterative observations of team interactions. This model contributes to the human-centered computing community because teams with higher levels of team cognition exhibit higher levels of performance and longevity. More importantly, it proposes a model for more ethical use of AI teammates on human-AI teams that is applicable to a variety of human-AI teaming contexts and leaves room for future innovation.
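The two-part model described in the abstract could be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class names, the weighted-norm representation of the ethical code, the exponential update rule, and the autonomy formula are all assumptions made for exposition.

```python
# Illustrative sketch of the two-part model (all names and update rules
# are assumptions for exposition, not the authors' implementation).
from dataclasses import dataclass, field


@dataclass
class EthicalCode:
    """Team ethical code represented as weighted norms the AI teammate tracks."""
    norms: dict[str, float] = field(default_factory=dict)

    def update(self, observation: dict[str, float], rate: float = 0.1) -> None:
        # Part 2: iteratively revise norm weights from observed team interactions.
        for norm, signal in observation.items():
            current = self.norms.get(norm, 0.5)
            self.norms[norm] = (1 - rate) * current + rate * signal


@dataclass
class AdaptiveAgent:
    """AI teammate whose autonomy level is informed by the shared ethical code."""
    code: EthicalCode
    autonomy: float = 0.5  # 0.0 = fully supervised, 1.0 = fully autonomous

    def adapt_autonomy(self, task_risk: float) -> float:
        # Part 1: reduce autonomy when a risky task implicates strongly held norms,
        # deferring to human teammates in ethically sensitive situations.
        concern = max(self.code.norms.values(), default=0.5) * task_risk
        self.autonomy = max(0.0, min(1.0, 1.0 - concern))
        return self.autonomy


agent = AdaptiveAgent(EthicalCode({"transparency": 0.8}))
agent.code.update({"transparency": 1.0})  # teammates reinforce transparent behavior
level = agent.adapt_autonomy(task_risk=0.9)
```

In this sketch, the two parts interact in a loop: each cycle of team interaction updates the norm weights, which in turn shift the autonomy level the agent selects for the next task.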