Summary of "Top 5 techniques for building the worst microservice system ever - William Brander - NDC London 2023"
A humorous, experience-driven walkthrough of common microservice anti-patterns and how they degrade reliability, performance and maintainability. Examples are .NET-focused but the patterns apply broadly.
High-level theme
This talk uses real-world support experience (Particular Software / NServiceBus) to highlight microservice anti-patterns. The goal is to show how well-intentioned architectural choices can introduce latency, operational complexity, and brittle coupling.
Core concepts covered
- Strangler‑fig migration: extracting functionality from a monolith can preserve original coupling while adding network latency and ops complexity.
- .NET garbage collection and performance: with generational (Gen 0/1/2) collection, Gen‑2 collections (historically stop‑the‑world) can cause large latency spikes; threads that block for a long time (e.g., waiting on network calls) keep objects alive longer, promoting more of them to Gen 2.
- Throughput vs distribution: adding network hops typically reduces throughput and raises contention compared to a monolith.
- Big‑bang rewrite risks: loses incremental delivery, underestimates work, and creates long‑lived integration layers.
- Messaging queues and reliability pitfalls: naive use introduces poison messages, retry storms, dead‑letter queues, and the need for throttling/backoff.
- Not‑Invented‑Here (NIH): building homegrown frameworks wastes effort and increases maintenance compared to battle‑tested libraries.
- Service boundary design: modeling services by nouns (product, order, customer) often creates chatty cross‑service calls when users invoke verbs that span domains.
- Read models / CQRS & eventing: projections and pub/sub reduce synchronous coupling but don’t eliminate logical coupling — business rule changes still need propagation.
- Engine pattern: a central runtime (engine) exposes interfaces; services implement interfaces and drop compiled plugins into the engine. This keeps logical ownership while hosting code elsewhere — useful selectively but introduces coupling and deployment tradeoffs.
- Logical vs physical boundaries: moving code physically doesn’t change logical ownership; conflating them causes subtle coupling.
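The poison-message pitfall above can be made concrete with a minimal sketch (in Python for brevity; the talk's examples are .NET, and the queue, handler, and retry cap here are illustrative, not from the talk): a consumer that limits attempts per message and moves repeat offenders to a dead-letter queue instead of letting one bad message block everything behind it.

```python
from collections import deque

MAX_ATTEMPTS = 3  # illustrative retry cap, not a recommendation from the talk

def consume(queue, handler, dead_letters):
    """Drain the queue; after MAX_ATTEMPTS failures a message is
    quarantined in dead_letters rather than blocking the queue."""
    attempts = {}
    while queue:
        msg = queue.popleft()
        try:
            handler(msg)
        except Exception:
            attempts[msg] = attempts.get(msg, 0) + 1
            if attempts[msg] >= MAX_ATTEMPTS:
                dead_letters.append(msg)  # poison message quarantined
            else:
                queue.append(msg)         # retry later, from the back
```

Production messaging frameworks (NServiceBus, Rebus, MassTransit) implement this recoverability logic, plus delayed retries and error-queue tooling, so you don't have to.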
Five anti‑pattern techniques (and why they hurt)
1. Put HTTP/network calls in front of everything
   - Adds network latency, increases thread lifetimes and GC promotions, lowers throughput, and creates topology coupling.
2. Attempt a big‑bang rewrite
   - Stops incremental delivery, underestimates effort, and results in long‑lived migration layers. Works only for tiny systems or personal CV projects.
3. Replace sync calls with naive queuing and reinvent reliability
   - Poison messages can block queues. Naive retries without backoff cause retry storms and DDoS‑like load on downstream systems. Teams often end up building custom frameworks instead of using proven tooling.
4. Build everything yourself (Not‑Invented‑Here)
   - Reimplementing messaging, retries, UI frameworks, etc., for the joy of engineering stalls business delivery and increases the maintenance burden.
5. Define services around nouns (objects) rather than verbs (behaviors)
   - Leads to chatty cross‑service requests, API gateway/orchestration sprawl, duplicated read models, and hidden coupling.
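For the naive-queuing technique, the remedy the talk points toward (proven tooling such as Polly) comes down to retrying with exponential backoff and jitter instead of immediate re-sends, so failing clients don't synchronize into a retry storm. A hypothetical Python sketch of such a delay schedule (parameter names and defaults are my own, not from the talk):

```python
import random

def backoff_delays(base=0.1, factor=2.0, max_delay=10.0, attempts=5, jitter=True):
    """Return a capped exponential backoff schedule in seconds.
    Full jitter draws each delay uniformly from [0, computed delay],
    spreading retries so clients don't hammer a downstream in lockstep."""
    delays = []
    for n in range(attempts):
        d = min(base * factor ** n, max_delay)
        if jitter:
            d = random.uniform(0, d)
        delays.append(d)
    return delays
```

Without jitter the schedule is deterministic (0.1 s, 0.2 s, 0.4 s, ...); with jitter, many clients recovering from the same outage retry at different moments instead of simultaneously.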
Practical notes and recommended alternatives
- Watch for Gen‑2 GC pauses in .NET; distributing work can aggravate GC effects.
- Use established libraries for messaging and resiliency rather than reimplementing:
  - NServiceBus (the speaker's employer)
  - Rebus, MassTransit (alternatives)
  - Polly (transient‑fault handling / retries)
- Use CQRS/read models and eventing to improve read‑side performance, but remember they do not remove logical coupling — you must still synchronize business rule changes.
- Consider the engine pattern selectively to avoid duplicating projection logic across services: keep implementations in the logical owner’s codebase but execute them within a shared runtime. Evaluate coupling and deployment complexity before adopting.
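The engine pattern above can be sketched as a shared runtime that invokes implementations of an interface it exposes, while each implementation remains logically owned by its service team. A minimal Python illustration (the talk's context is .NET plugin assemblies; the interface, registry, and event names here are invented for the example):

```python
from abc import ABC, abstractmethod

class Projection(ABC):
    """Interface the engine exposes; each service ships an implementation."""
    @abstractmethod
    def apply(self, event: dict, read_model: dict) -> None: ...

class Engine:
    """Shared runtime: hosts plugins physically, while each plugin stays
    logically owned by the service team that wrote it."""
    def __init__(self):
        self._plugins: list[Projection] = []

    def register(self, plugin: Projection) -> None:
        self._plugins.append(plugin)

    def dispatch(self, event: dict, read_model: dict) -> None:
        # Every registered projection sees every event and updates
        # its slice of the shared read model.
        for plugin in self._plugins:
            plugin.apply(event, read_model)

# Hypothetical plugin owned by an "orders" service
class OrderCountProjection(Projection):
    def apply(self, event, read_model):
        if event.get("type") == "OrderPlaced":
            read_model["order_count"] = read_model.get("order_count", 0) + 1
```

The tradeoff the talk flags shows up even here: every plugin is coupled to the engine's `Projection` interface, and redeploying the engine redeploys everyone's projections.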
Resources and references
- Particular Software / NServiceBus (practical support experience behind the talk)
- Libraries/tools mentioned: NServiceBus, Rebus, MassTransit, Polly (for retry/transient‑fault handling)
- YouTube: a microservices explainer by “crisam” — recommended for describing the chatty‑services problem
- Historical/illustrative reference: a Stack Overflow incident / blog post about GC pauses (used to illustrate Gen‑2 impact)
Speaker / source
- William Brander — developer and support engineer at Particular Software (NServiceBus); presenter, NDC London 2023.