Summary of "Prioritizing Technical Debt as If Time & Money Matters • Adam Tornhill • GOTO 2022"
Core problem and motivation
Per Lehman's laws, software must continually change to stay useful, yet that very evolution tends to increase complexity. Left unchecked, growing complexity makes change slower, less predictable, and more error-prone.
- Different roles see different symptoms:
  - Product teams: slower roadmap execution and longer, less reliable estimates.
  - Engineers: higher turnover, key-person dependencies, slower delivery.
  - Users: more bugs and lower external quality.
- Technical debt is often invisible in code alone. Static-analysis remediation-time estimates rarely match the true extra effort required when working on that code.
Main approach: behavioral code analysis
Key idea: combine code metrics with human and organizational behavior (version-control history) to make technical debt visible and actionable.
Version control (git) is a rich data source — commits, authorship, and evolution patterns reveal where the organization actually works on the code.
Hotspots + code health = prioritization
- Hotspots: identify files or components with high change frequency (commit activity). Activity tends to concentrate in a small part of the system (power-law / Pareto).
- Code health: measure complexity across many attributes (20–25 language-specific factors) rather than relying on a single metric. Common factors include:
  - Low cohesion
  - “Brain” classes/methods (very large or overly central methods)
  - Copy/paste and duplicated code
  - Deeply nested logic
- Prioritization principle: complicated code that is frequently changed is a high-priority hotspot. Complex code that is rarely changed can often be deferred.
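The hotspot step above can be sketched with plain git data: counting how often each file appears in the commit log approximates its change frequency. A minimal sketch, assuming log text in the shape produced by `git log --name-only --format=` (the sample paths below are illustrative, not from the talk):

```python
from collections import Counter

def count_file_changes(git_log: str) -> Counter:
    """Count commits per file from `git log --name-only --format=` output,
    where each commit lists its touched files, one per line."""
    files = [line.strip() for line in git_log.splitlines() if line.strip()]
    return Counter(files)

# Illustrative log output (hypothetical file names).
sample_log = """
src/ActivityManager.java
src/util/Strings.java

src/ActivityManager.java

src/ActivityManager.java
src/Handler.java
"""

# Activity concentrates in a few files, matching the power-law observation.
hotspots = count_file_changes(sample_log).most_common()
print(hotspots)  # src/ActivityManager.java leads with 3 changes
```

In a real analysis, the same counting runs over the full repository history, and the top-ranked files become the candidate hotspots to cross-check against code health.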
Techniques and visualizations demonstrated
- Hotspot visualization: circles representing folders/files sized by complexity and colored by change frequency (red = many commits).
- Hotspots X‑ray: drill down to function level by mapping git commits to individual functions to find frequently changed functions within large legacy files.
- Example (Android platform/frameworks/base):
  - Massive code base (~3M LOC, >2000 developers).
  - Top file hotspot: ActivityManagerService (~20k LOC).
  - Function-level hotspot: handleMessage — ~500 LOC, modified ~98 times, with a cyclomatic complexity implying at least ~101 unit tests for full branch coverage.
- Cyclomatic complexity has limited predictive power on its own, but it is useful as a lower bound on the number of test cases needed to fully cover a function.
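The lower-bound idea can be illustrated with a crude approximation: cyclomatic complexity is one plus the number of decision points, so a value of ~101 implies at least ~101 tests for full branch coverage. Counting branch keywords, as below, is a rough sketch rather than a real parser-based measurement:

```python
import re

def approx_cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: one plus the number of decision points.
    Keyword counting is a crude approximation (a real tool walks the AST),
    but it conveys why complexity bounds the minimum number of test cases."""
    decision_keywords = r"\b(if|elif|for|while|case|catch|and|or)\b"
    return 1 + len(re.findall(decision_keywords, source))

snippet = """
if x > 0:
    for item in items:
        if item.valid and item.ready:
            process(item)
"""
print(approx_cyclomatic_complexity(snippet))  # 5: four decision points + 1
```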
People and social factors
- Legacy code is not the same as technical debt; unfamiliarity (inherited code) often causes the “legacy” perception.
- Contribution concentration matters: loss of core authors can create knowledge black holes. Example: ASP.NET Core showed highly skewed author contributions, meaning losing a top contributor is far more damaging than losing someone from the long tail.
- Add a third axis to prioritization: author/knowledge distribution. High-risk areas = hotspot + poor code health + single-author or author-departure risk.
- Mitigation patterns:
- Pair departing core authors with staying developers to refactor and transfer knowledge.
- Focus pairing and refactoring efforts on prioritized hotspots.
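The knowledge axis described above can be estimated from per-file authorship data, e.g. derived from `git shortlog -sn -- <file>`. A minimal sketch with hypothetical numbers:

```python
def knowledge_concentration(commits_by_author: dict[str, int]) -> float:
    """Fraction of commits made by the single largest contributor.
    Values near 1.0 flag key-person risk: losing that author could
    leave a knowledge black hole."""
    total = sum(commits_by_author.values())
    return max(commits_by_author.values()) / total

# Hypothetical per-file authorship counts.
authors = {"alice": 180, "bob": 15, "carol": 5}
print(round(knowledge_concentration(authors), 2))  # 0.9 -> high single-author risk
```

A skewed distribution like this is the signal to prioritize pairing and knowledge transfer before the dominant author departs.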
Practical recommendations and workflow
Don’t rely on raw static-analysis debt scores alone — they are often unactionable and demotivating.
Suggested workflow to prioritize technical debt:
- Extract version-control activity to identify hotspots (commit frequency).
- Compute multi-factor code-health metrics per file/function.
- Combine dimensions (hotspot × code health × knowledge/ownership) to rank risk and cost.
- Drill down to functions (hotspots X‑ray) to get safe, focused refactoring targets.
- Mitigate social risk via pairing/onboarding and targeted refactorings.
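The workflow's ranking step can be sketched as a simple product of the three dimensions. The score below is illustrative only (the weighting is an assumption, not CodeScene's actual model), but it reproduces the talk's principle that complex-but-stable code sinks while hot, unhealthy, single-author code rises:

```python
from dataclasses import dataclass

@dataclass
class FileStats:
    path: str
    change_frequency: int   # commits touching this file
    code_health: float      # 0.0 (worst) .. 1.0 (healthiest)
    author_risk: float      # 0.0 (widely shared) .. 1.0 (single author)

def risk_score(f: FileStats) -> float:
    """Illustrative combined score: hotspot x poor health x knowledge risk."""
    return f.change_frequency * (1.0 - f.code_health) * (0.5 + 0.5 * f.author_risk)

# Hypothetical inputs echoing the talk's examples.
files = [
    FileStats("ActivityManagerService.java", 98, 0.2, 0.9),
    FileStats("Strings.java", 3, 0.9, 0.1),
    FileStats("ComplexButStable.java", 1, 0.1, 0.5),
]
for f in sorted(files, key=risk_score, reverse=True):
    print(f"{f.path}: {risk_score(f):.1f}")
```

Note how the complex but rarely changed file scores far below the hotspot, which is exactly the deferral rule from the prioritization principle.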
Tools and further reading:
- CodeScene — visualization/analysis tool used in the talk; a Community Edition is available: https://codescene.com
- Adam Tornhill’s books and blog: “Software Design X‑Rays” and related content — https://adamtornhill.com
- CodeScene blog and write-ups for practical examples and tutorials.
Caveats and notes
- Code complexity is multifaceted; avoid single-metric thinking.
- Behavioral analysis complements — it does not replace — developer expertise and domain knowledge.
- Visual patterns (color, size, Pareto distributions) help quickly surface priorities but should be interpreted alongside context.
Main speaker and sources
- Speaker: Adam Tornhill (presenter; author of the discussed techniques and books).
- Referenced concepts and sources: Lehman’s laws of software evolution; version-control (git) data; academic research on complexity metrics.
- Tool highlighted: CodeScene.