Summary of "How to Improve Developer Productivity • Jez Humble • YOW! 2020"
Thesis
Developer productivity should be measured by system- and team-level outcomes (value delivered, speed and stability of delivery), not by simple output proxies (lines of code, story point velocity, or individual utilization). There are reliable, research-backed measures and practices that predict and improve software delivery performance, organizational performance, product quality and developer wellbeing.
Why common metrics are misleading
- Lines of code: measures output, not value. More code usually increases maintenance burden; deleting or minimizing code can be a positive outcome.
- Velocity (story points): designed for team capacity planning, not for cross-team comparison or performance measurement. It can be gamed and discourages collaboration.
- Utilization (“keep everyone 100% busy”): high utilization leaves no room for test automation, refactoring, or incident response; it causes thrashing and burnout. Leave slack: reserve roughly 20% of capacity for unplanned work and improvements.
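Queueing theory makes the utilization point concrete. A minimal sketch (not from the talk, and assuming a simple M/M/1 queue with a hypothetical service rate of one task per day): average wait grows as utilization/(1 − utilization), so it explodes as teams approach 100% busy.

```python
def average_wait(utilization: float, service_rate: float = 1.0) -> float:
    """Mean queueing delay for an M/M/1 queue: W_q = rho / (mu * (1 - rho))."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (service_rate * (1.0 - utilization))

# Wait time roughly doubles from 80% to 90% busy, and explodes near 100%.
for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"{rho:.0%} busy -> average wait {average_wait(rho):.1f} days")
```

At 50% utilization tasks wait about one service time; at 99% they wait about a hundred, which is the intuition behind reserving slack.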
A valid, reliable measurement framework (DORA)
Use the four DORA metrics to measure software delivery performance:
- Deploy frequency — how often you deploy to production (speed).
- Lead time for changes — time from code commit (or start of change) to production (speed).
- Change failure rate — percent of production changes that require remediation (stability).
- Time to restore service (MTTR) — how long to recover from incidents (stability).
High (elite) performers are both faster and more stable — speed and stability are not a strict trade-off.
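The four metrics can be computed directly from deployment and incident records. A minimal sketch, with illustrative data and field names (commit time, deploy time, whether the deploy caused a failure):

```python
from datetime import datetime, timedelta
from statistics import median

# (commit_time, deploy_time, deploy_caused_failure) -- illustrative records
deploys = [
    (datetime(2020, 1, 1, 9),  datetime(2020, 1, 1, 11), False),
    (datetime(2020, 1, 2, 10), datetime(2020, 1, 2, 12), True),
    (datetime(2020, 1, 3, 9),  datetime(2020, 1, 3, 10), False),
    (datetime(2020, 1, 4, 9),  datetime(2020, 1, 4, 15), False),
]
restore_times = [timedelta(hours=1)]  # one incident, restored in an hour

window_days = (deploys[-1][1].date() - deploys[0][1].date()).days + 1
deploy_frequency = len(deploys) / window_days                    # deploys/day
lead_time_hours = median((d - c).total_seconds() / 3600 for c, d, _ in deploys)
change_failure_rate = sum(1 for *_, failed in deploys if failed) / len(deploys)
time_to_restore_hours = median(t.total_seconds() / 3600 for t in restore_times)
```

Note that deploy frequency and lead time measure speed, while change failure rate and time to restore measure stability; elite performers score well on all four at once.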
High-level practices and capabilities that drive performance
- Continuous delivery (technical practices): make releases routine, low-risk and repeatable via automated pipelines and test automation.
- Lean management practices: limit work in progress, visual management, lightweight approvals, use production feedback for decisions.
- Lean product development practices: small-batch delivery, experimentation, A/B testing, rapid customer feedback.
- Effective leadership: enables teams to experiment, adopt practices, and build a supportive culture.
Practical elements of continuous delivery and the deployment pipeline
- Make releases boring: push-button, low-risk, reproducible at any time (including business hours).
- Deployment pipeline pattern:
  - Every change in version control triggers fast checks (unit tests, quick acceptance tests).
  - Fix failing early tests immediately.
  - Successful builds go downstream for broader automated tests and then human-led checks (exploratory, usability, performance, security).
  - Deliver feedback and fixes as close to the change as possible (shift-left testing).
- Focus optimization on reducing lead time for changes. Ask: “How long to deploy a one-line change in the regular process?”
- Integrate security tests into the pipeline so InfoSec enables teams rather than acting solely as a gatekeeper.
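The pipeline pattern above can be sketched as a staged runner: fast checks first, broader suites and security checks downstream, stopping at the first failure so feedback lands close to the change. Stage names and the check functions are illustrative placeholders, not a real CI configuration.

```python
from typing import Callable

# Each stage is (name, list of check functions); the lambdas stand in for
# real test suites and scanners.
Stage = tuple[str, list[Callable[[], bool]]]

def run_pipeline(stages: list[Stage]) -> tuple[bool, str]:
    """Run stages in order, stopping at the first failure (fail fast)."""
    name = ""
    for name, checks in stages:
        if not all(check() for check in checks):
            return False, name  # feedback lands close to the change
    return True, name

green: list[Stage] = [
    ("commit",     [lambda: True, lambda: True]),  # unit tests, lint
    ("acceptance", [lambda: True]),                # broader automated suites
    ("security",   [lambda: True]),                # automated security checks
]
ok, last_stage = run_pipeline(green)
```

Ordering stages from fastest to slowest is what keeps lead time for a one-line change short: most failures are caught in the cheap early stages.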
Team autonomy and architecture — five readiness questions
Can the team answer “yes” to each?
- Can the team make large-scale design changes without permission from outside the team?
- Can the team complete its work without fine-grained coordination with other teams?
- Can the team deploy and release its product/service on demand, independently of dependent services?
- Can the team do most testing on demand without requiring an integrated test environment?
- Can the team perform deployments during normal business hours with negligible downtime?
Answering “yes” reduces cross-team dependencies and speeds delivery; achieving this may require redesigning org/team boundaries and technical architecture.
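The five questions make a simple readiness check. A sketch (the question text is from the talk; the all-or-nothing scoring rule follows its criterion that the team should answer “yes” to each):

```python
QUESTIONS = (
    "Make large-scale design changes without outside permission?",
    "Complete work without fine-grained cross-team coordination?",
    "Deploy and release on demand, independently of dependent services?",
    "Do most testing on demand, without an integrated test environment?",
    "Deploy during business hours with negligible downtime?",
)

def autonomy_ready(answers: dict[str, bool]) -> bool:
    """Ready only when every question is answered 'yes' (True)."""
    return all(answers.get(q, False) for q in QUESTIONS)

def blockers(answers: dict[str, bool]) -> list[str]:
    """Questions still answered 'no': where org/architecture work is needed."""
    return [q for q in QUESTIONS if not answers.get(q, False)]
```

The `blockers` helper is the useful part in practice: each “no” points at a specific organizational or architectural dependency to remove.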
Cloud — how to get value from it
- Refer to NIST’s five cloud characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service.
- Many organizations “move to the cloud” without changing processes and see no delivery improvements. Cloud creates major benefit only when paired with cloud-native practices (infrastructure as code, self-service, automation).
Management and product practices that must be combined
- Lean/flow practices: limit WIP, visual boards, fast feedback from production, internal lightweight approvals.
- Product practices: small batches, iterative delivery, A/B testing, built-in UX research and feedback loops, organizational permission to experiment.
- These practices are complementary; doing one alone rarely produces the full benefits.
Culture — measurement and why it matters
- Use a culture model (Westrum) that measures cooperation, communication, responsibility, and handling of failure. Cultures typically fall on a spectrum:
  - Pathological (power-oriented): scapegoating, hiding bad news.
  - Bureaucratic (rule-oriented): rigid process and rules.
  - Generative (performance-oriented): encourages information flow, shared responsibility, experimentation.
- Psychological safety (Google research): the biggest predictor of high-performing teams. Teams must feel safe to surface bad news and take risks; leaders must model this behavior.
Individual productivity — how to think about it
- Do not rank individuals by raw output. Prefer “felt” or experienced productivity: the ability to complete complex, time-consuming tasks with minimal interruptions (flow).
- Drivers of individual productivity:
- Psychological safety and supportive team culture
- Good tooling and search (internal and external)
- Low technical debt (so work is easier to change)
- Reduced interruptions and protected focus time
- Technical debt reduction tactics:
- Loosely coupled architecture to reduce dependencies
- Code maintainability (discoverability, reuse, dependency management)
- Effective monitoring and observability for easier debugging
Wellbeing and burnout
- Improved delivery practices and productivity make it easier to detach and recover from work, improve coping with work stress, and lower long-term burnout risk.
COVID / remote work findings
- Early GitHub research shows developer activity stayed consistent or increased during the pandemic/remote shift.
- Flexible tools and processes can support developer productivity in remote contexts; many remote-friendly practices can be preserved post-pandemic.
Concrete recommendations / checklist
Stop using these as performance measures:
- Lines of code
- Story-point velocity as a cross-team metric
- 100% utilization
Start measuring and improving with DORA metrics:
- Deploy frequency, lead time for changes, change failure rate, time to restore service
Build and maintain a deployment pipeline:
- Trigger on every version-control change
- Run fast unit tests and a few acceptance tests immediately
- Run broader automated tests downstream (integration, performance, security)
- Enable rapid feedback and fixes close to the change
Other recommendations:
- Allocate and protect slack: reserve ~20% capacity for automation, refactoring, and unplanned work.
- Invest in continuous delivery practices: trunk-based development, CI, automated testing, database change management, observability.
- Design teams and architecture to reduce dependencies (aim to answer the five autonomy questions “yes”).
- Use cloud correctly: self-service, automation, infrastructure as code — ensure process change, not just infrastructure relocation.
- Combine lean management and lean product practices: limit WIP, visualize flow, collect production feedback, enable experiments and A/B tests.
- Build psychological safety: leaders model blameless learning, encourage reporting of bad news, and reward collaboration.
- Reduce technical debt: prioritize maintainability, observability, and loosely coupled design.
- Measure felt productivity (surveys) and act on drivers (tools, culture, technical debt) rather than ranking individuals.
- Integrate security into the pipeline — shift InfoSec from gatekeeper to enabler with preapproved libraries and automated checks.
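One way the “preapproved libraries” idea can be wired into a pipeline is a dependency gate: declared packages are checked against an InfoSec-maintained allowlist, so security runs automatically on every change instead of as a late manual review. A minimal sketch; the package names, versions, and allowlist are hypothetical.

```python
# Hypothetical InfoSec-preapproved packages and versions.
APPROVED = {
    "requests": {"2.31.0", "2.32.0"},
    "flask": {"3.0.0"},
}

def dependency_violations(declared: dict[str, str]) -> list[str]:
    """Return declared dependencies not approved at the declared version."""
    return [
        f"{name}=={version}"
        for name, version in sorted(declared.items())
        if version not in APPROVED.get(name, set())
    ]
```

A non-empty result fails the build, which also gives the team the dependency inventory whose absence the Equifax example illustrates.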
Notable examples and cautionary stories
- Equifax breach: long lead times and unknown vulnerable components led to a large breach — illustrates why lead time and dependency inventory matter.
- Cloud migrations that do not change processes often deliver no delivery benefit and can be costly.
- Zero-downtime, business-hours deployments can be achieved even in regulated or legacy environments, but require systematic work.
Where to find the research and deeper guidance
- DORA / State of DevOps reports and research (multiple years of empirical data): Google Cloud DevOps research pages — cloud.google.com/devops.
- GitHub reports and analysis (including pandemic/remote-work impacts) by Nicole Forsgren and colleagues.
People and sources mentioned
- Jez Humble — speaker; co-author of Continuous Delivery and Accelerate; SRE; DORA co-founder
- Dr. Nicole Forsgren — principal investigator for DORA research; later GitHub researcher
- DORA (DevOps Research and Assessment)
- Google Cloud — current home of DORA research and publisher of State of DevOps content
- Puppet — early partner in DORA research
- W. Edwards Deming — “build quality into the product”
- Mary and Tom Poppendieck — Implementing Lean Software Development
- Ward Cunningham — coined “technical debt”
- NIST — source for cloud characteristics
- GitHub — source of COVID/remote work analysis
- Examples cited: Equifax, Ticketmaster, and a Sun/Suncorp-like Australian organization
Note: the original subtitles contained transcription errors for some names; common corrections are applied (e.g., Jez Humble; Nicole Forsgren; Mary & Tom Poppendieck; Ward Cunningham).