This presentation describes an extract-transform-load (ETL) pipeline that moves key commit and code-quality data from the build system into a code-quality database. It covers how the database was used to transform a large project team through targeted technical-debt reports and email notifications of incurred technical debt to developers, leads, and management.
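Such a pipeline can be sketched in miniature. The `CommitRecord` type, the log format, and the filtering rule below are hypothetical illustrations of the ETL idea, not the actual system described in the talk:

```java
import java.util.List;

// Hypothetical record of one commit extracted from the build system.
record CommitRecord(String sha, String author, int filesChanged) {}

public class CommitEtl {
    // Extract: parse a raw "sha|author|filesChanged" line from a build log
    // (an assumed format for illustration).
    static CommitRecord extract(String rawLine) {
        String[] parts = rawLine.split("\\|");
        return new CommitRecord(parts[0], parts[1], Integer.parseInt(parts[2]));
    }

    // Transform: keep only commits large enough to warrant a quality review.
    static List<CommitRecord> transform(List<String> rawLines, int minFiles) {
        return rawLines.stream()
                .map(CommitEtl::extract)
                .filter(c -> c.filesChanged() >= minFiles)
                .toList();
    }

    public static void main(String[] args) {
        List<String> log = List.of("a1b2c3|alice|12", "d4e5f6|bob|1");
        // Load step would insert rows into the code-quality database;
        // here we just print the surviving records.
        transform(log, 5).forEach(System.out::println);
    }
}
```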
A three-year case study of a 12,000-class, 2-million-line Java system with 25,000 commits comprising 250,000 program changes by 175 developers will be used. Unit tests increased ninefold to 36,000, raising branch/line coverage from 27% to 82%. A total of 5,500 technical-debt items across 10 debt categories were identified, tracked, and resolved. A similarly sized C# system will also be examined, along with two different version-control and build systems.
The code-quality database makes technical debt visible and actionable to managers. Architects responsible for not just business enhancements but also the long-term quality of the code base can use the database to keep debt under control, even during a sprint.
Many people talk about technical debt in abstract terms. This presentation defines it in terms of precise metrics and shows how it can be measured and tracked in an Agile manner. Technical debt is a concrete problem in the code that must eventually be fixed; it is more than the degradation of a code metric. Architects need to understand the details of the three groups of debt: static analysis, testing coverage, and class metrics.
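A tracked debt item can be modeled as a small record tagged with one of the three groups. The schema below is a hypothetical sketch for illustration, not the talk's actual database design:

```java
import java.util.List;

// The three groups of debt named in the talk.
enum DebtGroup { STATIC_ANALYSIS, TEST_COVERAGE, CLASS_METRICS }

// Hypothetical shape of one tracked debt item.
record DebtItem(String className, DebtGroup group, String rule, boolean resolved) {}

public class DebtReport {
    // Count the items that still need to be paid back.
    static long countOpen(List<DebtItem> items) {
        return items.stream().filter(i -> !i.resolved()).count();
    }

    public static void main(String[] args) {
        var items = List.of(
            new DebtItem("OrderService", DebtGroup.TEST_COVERAGE,
                         "branch coverage below standard", false),
            new DebtItem("OrderService", DebtGroup.CLASS_METRICS,
                         "cyclomatic complexity above threshold", true));
        System.out.println("open debt items: " + countOpen(items)); // prints 1
    }
}
```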
How much unit-test coverage is enough? Management often sets a standard of 80%, assuming, per the 80/20 rule, that this is the most cost-effective level. But which 20% should go untested? This presentation asserts that 100% is an attainable goal and details how 100% coverage was achieved as part of normal maintenance.
Debt is incurred under various pressures and is paid back later. This is especially true with coverage and metrics debt. Tracking debt is essential to making it visible and enabling its reduction later. Tracking techniques are explained, and the role of reviewers and code-quality automation is explored.
Techniques for tracking the error rate of actual commits are explained. Management reports on outstanding technical debt are presented and explained, along with their relationship to a trouble-ticket system.
The management of false positives is explained: about 1% of automatically detected debt items are false positives. Under what conditions may the metrics standard be violated? For example, there are five conditions under which a line or branch cannot be unit tested.
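One common case of an untestable branch (my example, not necessarily one of the talk's five conditions) is defensive code the compiler requires but that can never execute, such as the default arm of a switch over an enum whose values are all handled:

```java
enum Severity { LOW, HIGH }

public class Untestable {
    // Every enum value is handled, so the default arm can never execute,
    // yet the compiler requires it for a definite return and a coverage
    // tool will still report its branch as uncovered.
    static int weight(Severity s) {
        switch (s) {
            case LOW:  return 1;
            case HIGH: return 10;
            default:
                // Unreachable defensive code: a legitimate coverage exclusion.
                throw new AssertionError("unhandled severity: " + s);
        }
    }

    public static void main(String[] args) {
        System.out.println(weight(Severity.HIGH)); // prints 10
    }
}
```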
The role of designing and writing testable code is explored. Architecture decisions can enable higher testability; for example, functional programming allows already-tested code structures to be employed. Good functional-programming libraries of pretested logic routines reduce the unit-testing burden.
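To illustrate the point with Java's own pretested library routines (a generic example, not code from the case study): a hand-written loop carries branches that each need unit tests, while the stream version delegates iteration and accumulation to `java.util.stream`, leaving only a small predicate to test.

```java
import java.util.List;

public class FunctionalStyle {
    // Imperative version: the loop, the condition, and the accumulator
    // are all hand-written, so each branch needs its own unit test.
    static int sumOfEvensImperative(List<Integer> xs) {
        int sum = 0;
        for (int x : xs) {
            if (x % 2 == 0) {
                sum += x;
            }
        }
        return sum;
    }

    // Functional version: filter and sum are pretested library routines;
    // only the one-line predicate remains to be tested.
    static int sumOfEvensFunctional(List<Integer> xs) {
        return xs.stream()
                 .filter(x -> x % 2 == 0)
                 .mapToInt(Integer::intValue)
                 .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfEvensFunctional(List.of(1, 2, 3, 4))); // prints 6
    }
}
```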