Component Mismatches Are a Critical Bottleneck to Fielding AI-Enabled Systems in the Public Sector
December 2019 • Conference Paper
Software Engineering Institute
The use of machine learning or artificial intelligence (ML/AI) holds substantial potential for improving many functions and needs of the public sector. In practice, however, integrating ML/AI components into public sector applications is severely limited not only by the fragility of these components and their algorithms, but also by mismatches between the components of ML-enabled systems. For example, if an ML model is trained on data that differs from the data in the operational environment, field performance of the ML component will be dramatically reduced. Separate from software engineering considerations, the expertise needed to field an ML/AI component within a system frequently comes from outside software engineering. As a result, the assumptions and even the descriptive language used by practitioners from these different disciplines can exacerbate other challenges of integrating ML/AI components into larger systems. We are investigating classes of mismatches in ML/AI systems integration to identify the implicit assumptions made by practitioners in different fields (data scientists, software engineers, operations staff) and to find ways to communicate the appropriate information explicitly. We will discuss a few categories of mismatch and provide examples from each class. To enable ML/AI components to be fielded in a meaningful way, we will need to understand the mismatches that exist and develop practices to mitigate their impacts.
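As a concrete illustration of the training-versus-operational data mismatch mentioned above, the sketch below (our own illustrative example, not part of the paper) compares the distribution of a single numeric feature in training data against data observed in the field using a two-sample Kolmogorov-Smirnov test; the feature values and threshold are hypothetical.

```python
# Minimal sketch (assumption, not the paper's method): flag a mismatch between
# the data an ML model was trained on and the data seen in operation by
# comparing per-feature distributions with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(train_col, field_col, alpha=0.01):
    """Return (drifted, p_value) for one numeric feature.

    train_col / field_col: 1-D arrays of the same feature, taken from the
    training data and from the operational environment, respectively.
    """
    statistic, p_value = ks_2samp(train_col, field_col)
    return p_value < alpha, p_value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical feature: training data centered at 0, operational data
    # shifted to 0.5, standing in for a field environment that differs
    # from the environment the model was trained for.
    train = rng.normal(loc=0.0, scale=1.0, size=5000)
    field = rng.normal(loc=0.5, scale=1.0, size=5000)
    drifted, p = detect_feature_drift(train, field)
    print(f"drift detected: {drifted} (p = {p:.3g})")
```

In a fielded system, a check of this kind could run continuously on incoming operational data and alert operators before the ML component's performance degrades silently.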