
All Architecture Evaluation Is Not the Same: Lessons Learned from More Than 50 Architecture Evaluations in Industry

May 2013 Presentation
Matthias Naab (Fraunhofer IESE), Thorsten Keuler (Fraunhofer IESE), Jens Knodel (Fraunhofer IESE)

A presentation from the ninth annual SATURN conference, held in Minneapolis, MN, April 29 - May 3, 2013.


Architecture evaluation has become a mature subdiscipline in architecting with much high-quality practical and scientific literature available. The literature does a good job of describing methods for evaluating particular quality attributes. However, detailed information on characteristics and context factors in concrete industrial settings is harder to find. After performing more than 50 architecture evaluations for industrial customers in recent years, we have collected interesting facts and findings about architecture evaluations in practice. In this presentation, we share these with other practitioners and researchers.

This session should be of special interest to two groups of stakeholders: those who need insights about their systems and might want to ask for an architecture evaluation, and those who are actively involved in conducting architecture evaluations. We demonstrate a spectrum of diversity in architecture evaluations that might surprise even experienced practitioners.

Our main goal is to present the condensed experiences of more than 50 architecture evaluations. This will enable practitioners to classify their own architecture evaluations and to draw inspiration on the general topic of architecture evaluation. We package our lessons learned, commonalities, and unique factors of concrete cases, and we describe the architecture evaluation projects according to different characteristics. For each characteristic, we outline the range of experiences and show illustrative examples.

First, we describe the evaluation projects according to contextual factors:

  • What is the organizational setup of the architecture evaluation? Is the company ordering the evaluation also the one developing the product being evaluated?
  • Which stakeholder ordered the architecture evaluation?
  • In what context was the architecture evaluation ordered? Was the product in trouble, or was this a proactive measure?
  • What was the key goal? Which questions guided the evaluation?
  • What was the system under evaluation (anonymized; the systems come from diverse industries)?

Second, we describe the planning and setup of the architecture evaluation project itself:

  • How much effort do architecture evaluation projects require, and how is effort distributed among the team responsible for the product under evaluation and the evaluation team?
  • Which architecture evaluation methods were used to answer the evaluation questions?

Third, we report on outcomes of the evaluation projects:

  • What were the key results and findings of the architecture evaluations?
  • What follow-up activities did customer organizations engage in after the architecture evaluation?
  • What further benefits did customers gain from the architecture evaluation?

By reporting these experiences, we give practitioners an overview of the nature and characteristics of industrial architecture evaluations, complementing the available literature on architecture evaluation methods. Practitioners should thus be better able to judge their own situations, to know when an architecture evaluation might be helpful, and to know what to expect from it.