
AI Evaluation Methodology for Defensive Cyber Operator Tools

Presentation
The objective of this project is to develop a methodology for evaluating the capabilities of an AI defense using publicly available information about defensive network capabilities.
Publisher

Software Engineering Institute

Abstract

The goal of this project is to create a two-part methodology that will

  • enable the evaluation of the capabilities of AI-enabled network DCO tools
  • enumerate the principles by which the efficacy of an AI-based DCO tool might be reduced when subjected to adversarial evasions

The completed extensible evaluation methodology will

  • produce a new capability for the DoD to test and evaluate the defensive capabilities of an AI defense under realistic conditions
  • advance the state of the art in broader cybersecurity, as there is not yet a principled methodology for evaluating the defensive capabilities of an AI defense for enterprise networks
  • allow the DoD to repeatably test AI defenses to examine whether they have the desired defensive benefits, representing an increase in capability
  • lead to a deeper understanding of detecting and mitigating obfuscations and data poisoning with next-generation DCO tools

To date, the project has developed an initial methodology that permits the DoD to test and evaluate the defensive capabilities of an AI defense under realistic conditions. This methodology is undergoing further refinement and expansion to increase the information revealed by its application.
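
As a purely illustrative sketch of the kind of measurement such an evaluation could produce, the snippet below compares a detector's detection rate on baseline malicious activity against the same activity after adversarial evasion. Every name, threshold, and data value shown is a hypothetical placeholder and is not part of the project's methodology.

    # Hypothetical sketch only: names, thresholds, and data are illustrative
    # placeholders, not part of the SEI evaluation methodology.
    from typing import Callable, Iterable

    def detection_rate(detector: Callable[[dict], bool], events: Iterable[dict]) -> float:
        """Fraction of known-malicious events the detector flags."""
        events = list(events)
        return sum(1 for e in events if detector(e)) / len(events) if events else 0.0

    def efficacy_reduction(detector, baseline, evaded) -> float:
        """Drop in detection rate when the same activity is adversarially evaded."""
        return detection_rate(detector, baseline) - detection_rate(detector, evaded)

    # Toy stand-in detector: flags events whose payload entropy exceeds a threshold.
    toy_detector = lambda event: event.get("entropy", 0.0) > 6.0
    baseline = [{"entropy": 7.2}, {"entropy": 6.8}, {"entropy": 7.9}]  # unmodified malicious activity
    evaded = [{"entropy": 5.9}, {"entropy": 6.1}, {"entropy": 5.4}]    # same activity after evasion
    print(f"Efficacy reduction: {efficacy_reduction(toy_detector, baseline, evaded):.2f}")

A completed methodology would, of course, replace the toy detector and toy events with the AI-enabled DCO tool under test and realistic network telemetry.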