Call for Participation
While both industry and research communities devote substantial effort to AI, developing new AI technology and implementing AI systems are two distinct challenges. Current AI solutions are often tested only in controlled environments, and their performance is difficult to replicate, verify, and validate. To enable reliable deployment of AI and to build trust and confidence in AI systems, implementers need access to leading practices, processes, tools, and frameworks.
Call for Participation now closed.
Our symposium will feature a mix of keynote and invited talks, breakout sessions, and panel discussions. We look forward to exploring what AI engineering can and should entail.
We encourage submissions on topics that explore the pillars of AI engineering individually or at their intersections. Examples of relevant submissions include (but are not limited to):
- Beyond Accuracy: Enhanced Model Evaluation Metrics
- Design for Human-Machine Teaming
- Evaluating MLOps Pipelines and Tools
- Budget Constraints in Adversarial Machine Learning
- Broad and Wide Scalability Patterns for AI Systems
- How to Tell if Your Dataset is Sufficient to Solve Your Problem
- Maintaining Value Alignment in AI Systems Operations
- Methods for Creating and Demonstrating Trust in AI Systems
- Missy Cummings (Duke University)
- Rachel Dzombak (CMU SEI)
- Matthew Gaston (CMU SEI)
- Karen Myers (SRI International)
- William Streilein (MIT Lincoln Laboratory)