SW-HW Codesign for ML Workloads
Workshop at MLSys 2020 | March 4, 2020
Machine learning development workflows today involve the siloed design and optimization of task-specific software for a limited number of fixed hardware options. As a result, hardware and software are treated as independent components, and the impact of each on the other cannot be assessed or optimized jointly. This separation leads to computationally inefficient machine learning workloads.
Bridging the Gap Between Software and Hardware
Recently, both software and hardware have taken steps to become more domain specific. Machine-learning-focused software libraries provide operations and abstractions limited to workload-relevant use cases. Hardware makers have started manufacturing workload-specific chips in the form of field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and deep learning accelerators (DLAs). However, these efforts remain largely independent of each other, resulting in inefficiencies and less-than-ideal workload performance.
Ideally, hardware and software would be codesigned for a specific ML workload, but investing in a particular hardware design is costly, especially given the rapidly evolving state of ML. This workshop solicits extended abstracts that seek to bridge the gap between software and hardware in the areas of model design, model abstractions, model primitives, workload compression, hardware design, hardware optimization for power, data flow optimization, and compiler technologies.
Dr. John G. Wohlbier is a senior research scientist in the Emerging Technology Center at Carnegie Mellon University's Software Engineering Institute. Wohlbier started his career at Los Alamos National Laboratory, where he spent over a decade working on computational physics for the US Department of Energy's Advanced Simulation and Computing program. After Los Alamos, he spent several years supporting DoD high-performance computing (HPC) programs. His current focus is performance engineering for data-intensive software. His interests include computation on modern and emerging hardware, performance engineering, and computational physics.