Automating Mismatch Detection and Testing in ML Systems
November 2022 • Presentation
This project formalizes the detection of machine learning (ML) mismatch.
Software Engineering Institute
This project builds on a set of SEI-developed descriptors for elements of ML-enabled systems. We are developing a suite of tools to (1) automate ML mismatch detection and (2) demonstrate how to extend the descriptors to support testing of ML-enabled systems. The tools will also support descriptor validation on open source and DoD ML systems and components. For testing, we focus explicitly on the production readiness of ML components, which we define in terms of five attributes: ease of integration, testability, monitorability, maintainability, and quality, where quality means meeting both model requirements and system requirements.
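To make the idea of descriptor-based mismatch detection concrete, the following is a minimal, hypothetical sketch of how a tool might compare a trained-model descriptor against a production-environment descriptor and flag disagreements. The field names (`input_schema`, `framework_version`, `expected_data_distribution`) are illustrative assumptions, not the SEI's actual descriptor schema.

```python
def detect_mismatches(model_desc: dict, env_desc: dict) -> list:
    """Compare a trained-model descriptor against a production-environment
    descriptor and report attribute pairs that disagree.

    Field names are hypothetical examples of descriptor attributes.
    """
    mismatches = []
    # Fields assumed to require exact agreement between training and production.
    for field in ("input_schema", "framework_version", "expected_data_distribution"):
        model_val = model_desc.get(field)
        env_val = env_desc.get(field)
        # Only compare attributes that both descriptors actually declare.
        if model_val is not None and env_val is not None and model_val != env_val:
            mismatches.append((field, model_val, env_val))
    return mismatches

# Example: the model was trained on float32 inputs, but the production
# pipeline delivers uint8 images, a classic ML mismatch.
model = {"input_schema": "float32[1,224,224,3]", "framework_version": "tf-2.9"}
env = {"input_schema": "uint8[1,224,224,3]", "framework_version": "tf-2.9"}
print(detect_mismatches(model, env))
```

In practice, an automated tool would read such descriptors from machine-readable files and apply per-attribute comparison rules rather than simple equality, but the sketch captures the core check.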
This project’s end goal is for DoD organizations to adopt the descriptors and tools for early mismatch detection and production-readiness test and evaluation as part of their ML-enabled system development processes. In doing so, the project advances the SEI’s objective of modernizing software development and acquisition by improving the formalization of ML mismatch detection, improving testing practices for ML-enabled systems, and providing tool support that in the long run can be integrated into ML-enabled system development toolchains.