17th SEI Software Engineering Workshop for Educators
The SEI hosts this annual Workshop for Educators to foster an ongoing exchange of ideas among educators whose curricula include software engineering subjects. The event is free of charge and open to any accredited college-level educator.
This three-day virtual workshop features two days of SEI training. In lieu of the traditional artifact sharing and discussion on day three, we will conduct an educator-facilitated session on experiences and ideas for virtual education.
Dr. Grace Lewis
Dr. Lewis is principal researcher and lead of the Tactical and AI-enabled Systems (TAS) initiative at the Software Engineering Institute at Carnegie Mellon University. Lewis is the principal investigator for the “Characterizing and Detecting Mismatch in ML-Enabled Systems” research project. She is also a member of the “High Assurance Software-Defined IoT Security” research project and leads the SEI's work in tactical cloudlets. Lewis’ current areas of expertise and interest include software engineering for AI/ML systems, IoT security, edge computing, software architecture (in particular the development of software architecture practices for systems that integrate emerging technologies), and software engineering in society.
How Attendees Describe the Workshop
"a significant aid in teaching software engineering"
"a great source of relevant and timely software education guidance and resources"
National Agenda for Software Engineering Research & Development: Architecting the Systems of the Future
The mission of the SEI Software Solutions Division (SSD) is to advance the state of the practice in software engineering through applied research, development, and transition of innovative technologies for building and acquiring software-intensive systems. SSD is leading an SEI effort to engage the broad software engineering community to define a national agenda for software engineering research and development for the next decade.
Software Engineering for ML Systems: Characterizing and Detecting Mismatch
Despite the growing interest in machine learning (ML) across all industries, the reality is that development of ML capabilities is still mainly a research activity or a stand-alone project, with the exception of large companies such as Google and Microsoft. Deploying ML models in operational systems remains a significant challenge.

One problem is that the development and operation of ML-enabled systems involve three perspectives, with three different and often completely separate workflows and people: the data scientist builds the model; the software engineer integrates the model into a larger system; and the operations/release engineers deploy, operate, and monitor the system. Because these perspectives operate separately and often speak different languages, there are opportunities for mismatch between the assumptions each perspective makes about the elements of the ML-enabled system and the guarantees those elements actually provide. This problem is exacerbated by the fact that system elements evolve independently and at different rhythms, which can over time lead to unintentional mismatch. For example, if an ML model is trained on data that differs from the data in the operational environment, the performance of the ML component will be dramatically reduced.

This session will present preliminary results of a study to identify common mismatches that occur in the development and deployment of ML systems, along with best practices to address them. The end goal is to codify this information into a set of machine-readable ML-Enabled System Element Descriptors as a mechanism to enable manual and automated mismatch detection and prevention in ML-enabled systems.
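To illustrate the idea of machine-readable descriptors enabling automated mismatch detection, here is a minimal sketch in Python. It is not the SEI descriptor format described in the session; all class and field names are illustrative assumptions, showing only how assumptions from the data-science perspective could be checked mechanically against guarantees from the operations perspective.

```python
# A hedged sketch of descriptor-based mismatch detection.
# The descriptor schema below is hypothetical, not the SEI's.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelDescriptor:
    """Assumptions made by the data scientist who trained the model."""
    input_features: tuple       # feature names the model expects, in order
    input_dtypes: tuple         # expected type per feature
    training_data_version: str  # dataset version used for training

@dataclass(frozen=True)
class OperationalDescriptor:
    """Guarantees provided by the deployed operational environment."""
    available_features: tuple
    feature_dtypes: tuple
    data_version: str

def detect_mismatches(model: ModelDescriptor,
                      env: OperationalDescriptor) -> list:
    """Compare the two descriptors and report mismatched assumptions."""
    problems = []
    missing = set(model.input_features) - set(env.available_features)
    if missing:
        problems.append(f"missing features: {sorted(missing)}")
    for name, dtype in zip(model.input_features, model.input_dtypes):
        if name in env.available_features:
            idx = env.available_features.index(name)
            if env.feature_dtypes[idx] != dtype:
                problems.append(
                    f"dtype mismatch for {name}: model expects {dtype}, "
                    f"environment provides {env.feature_dtypes[idx]}")
    if model.training_data_version != env.data_version:
        problems.append("training and operational data versions differ")
    return problems
```

Because both sides are captured in a structured, machine-readable form, a check like this can run automatically at integration or deployment time instead of relying on conversations between teams that rarely share a workflow.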
Educator-Led Interactive Session: Experiences in Remote Learning and Teaching in the Times of COVID-19
Teaching software engineering in the time of coronavirus poses special challenges because our pedagogy emphasizes real-world experiences and interactions. At the same time, our discipline has relevant existing expertise, such as making remote teams work. Our workshop segment will emphasize learning from each other as software engineering educators. We will start forums ahead of the synchronous event itself and bring these discussions to a conclusion during that event. Stay tuned for how to get started!