Threats for Machine Learning
October 2020 • Webinar
Mark Sherman
Mark Sherman explains where machine learning applications can be attacked, the means for carrying out those attacks, and some mitigations you can use.
Abstract
This webcast illustrates where machine learning applications can be attacked, the means for carrying out those attacks, and some mitigations that can be employed. It reviews the elements involved in building and deploying a machine learning application, considering both data and processes, and examines the impact of attacks on each element in turn. Special attention is given to transfer learning, a popular way to quickly construct a machine learning application. Mitigations for these attacks are discussed, along with the engineering tradeoffs between security and accuracy. Finally, the webcast reviews the methods by which an attacker could gain access to the machine learning system.
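One class of attack the webcast covers is manipulating a model's inputs at inference time. As a minimal sketch (not taken from the webinar, and using a toy logistic-regression "model" with made-up weights), a fast-gradient-sign-style evasion attack perturbs an input in the direction that most increases the model's loss, which can flip its prediction:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained logistic-regression model: weights w, bias b.
w = np.array([2.0, -1.5])
b = 0.1

def predict(x):
    """Probability that x belongs to the positive class."""
    return sigmoid(w @ x + b)

x = np.array([1.0, 0.2])           # benign input, classified positive
p = predict(x)

# Gradient of the cross-entropy loss (true label 1) w.r.t. the input x
# is (p - 1) * w for this model.
grad = (p - 1.0) * w

eps = 0.8                          # attacker's perturbation budget
x_adv = x + eps * np.sign(grad)    # step that maximally increases the loss

print(predict(x))                  # confident positive prediction
print(predict(x_adv))              # prediction flips below 0.5
```

The same idea scales to deep networks, where the gradient with respect to the input is obtained by backpropagation; the engineering tradeoff the webcast mentions appears here too, since defenses such as adversarial training typically cost some clean-data accuracy.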
About the Speaker

Mark Sherman
Dr. Mark Sherman is the Technical Director of the Cyber Security Foundations group in the SEI's CERT® Division at the Carnegie Mellon University Software Engineering Institute. His team focuses on foundational research on the life cycle for building secure software and on data-driven analysis of cybersecurity. Prior to joining CERT, Dr. Sherman was at IBM and various startups, working on mobile systems, integrated hardware-software appliances, transaction processing, languages and compilers, virtualization, network protocols and databases. He has published over 50 papers on various topics in computer science.