Software Engineering Institute | Carnegie Mellon University

Digital Library

Presentation

Architectural Tradeoffs in Learning-Based Software

  • May 2018
  • By Pooyan Jamshidi (Carnegie Mellon University)
  • This presentation shows how architectural tradeoffs in learning-based software stacks are made via a case study of design space exploration in deep neural networks.
  • Publisher: Software Engineering Institute
  • This presentation was created for a conference series or symposium and does not necessarily reflect the positions and views of the Software Engineering Institute.
  • Abstract

    In classical software development, developers write explicit instructions in a programming language to hardcode the behavior of software systems. By writing each line of code, the programmer directs the software toward the desired behavior, in effect selecting a specific point in program space.

    Recently, however, software systems are adding learning components that, instead of hardcoding an explicit behavior, learn a behavior from data. These learning-intensive software systems are written in terms of models and their parameters, which need to be adjusted based on data. In learning-enabled systems, we specify constraints on the behavior of a desirable program (e.g., a data set of input–output example pairs) and use computational resources to search through the program space for a program that satisfies those constraints. In neural networks, we restrict the search to a continuous subset of the program space (see the sketch following the abstract).

    This talk provides experimental evidence of making tradeoffs for deep neural network models, using design space exploration of deep neural network architectures as a case study. Concrete experimental results are presented, along with additional case studies in big data (Storm, Cassandra), data analytics (configurable boosting algorithms), and robotics applications.
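
    The following minimal Python sketch (not part of the presentation; the names and toy data set are illustrative assumptions) illustrates the abstract's framing: the constraints are input–output examples, the continuous program space is the parameter space of a simple model, and gradient descent searches that space for a program that satisfies the constraints.

        # Constraint set: input-output pairs the learned "program" must approximate.
        examples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # roughly y = 2x + 1

        # A point in the continuous program space: parameters (w, b) of f(x) = w*x + b.
        w, b = 0.0, 0.0
        learning_rate = 0.05

        for step in range(2000):
            # Gradient of the mean squared error over the examples with respect to (w, b).
            grad_w = grad_b = 0.0
            for x, y in examples:
                error = (w * x + b) - y
                grad_w += 2 * error * x / len(examples)
                grad_b += 2 * error / len(examples)
            # Move through program space toward a program that satisfies the constraints.
            w -= learning_rate * grad_w
            b -= learning_rate * grad_b

        print(f"learned program: f(x) = {w:.2f}*x + {b:.2f}")  # converges near 2x + 1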

Part of a Collection

SATURN 2018 Presentations