Summary of "#17: Testing in the AI era"

Core thesis

AI is changing how we build and test software. Code generation is fast and cheap, increasing the volume of code and shifting the emphasis toward disciplined testing and careful architecture. Testing is moving from a separate verification step toward being a way to encode and enforce requirements directly in code.
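As a minimal sketch of what "encoding requirements directly in code" can mean, a requirement such as "a discount must never produce a negative price" can live in the test suite as an executable check. The function and test names here are hypothetical, not from the episode:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: applies a percentage discount,
    clamping the result at zero."""
    return max(price * (1 - percent / 100), 0.0)

def test_discount_never_negative():
    # The requirement, stated once in prose, is enforced here on every run.
    for price in (0.0, 9.99, 100.0):
        for percent in (0, 50, 100, 150):
            assert apply_discount(price, percent) >= 0.0
```

A test written this way doubles as documentation: the requirement survives refactors (including AI-driven ones) because violating it fails the build.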

Key technological concepts and practices

Tools and patterns mentioned

Practical recommendations

  1. Start with architecture and include testing requirements up-front (interfaces, unit tests, linting).
  2. Put good examples and desired patterns in the repository (agents.md) to provide context for future AI sessions.
  3. Use AI for tedious, repetitive work (scaffolding, boilerplate, basic UI tests) but always review generated code and tests.
  4. Prioritize automation for stable seams (APIs, services) and use exploratory/AI-driven tests for discovery and edge cases.
  5. Treat AI test outputs as information—require human review, reproduce findings, and attach concrete evidence (repro scripts, PoC) before acting.
  6. Combine AI-generated inputs with randomized/fuzzing scripts to broaden coverage.
  7. Invest in auditing, history, and coverage reporting when running agentic or long-running automated audits.
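Recommendation 6 can be sketched as a small fuzz loop that mutates AI-suggested seed inputs with random edits and checks a simple property (the function under test must never raise). The seed list, the function under test, and all names below are illustrative assumptions, not tools from the episode:

```python
import random
import string

# Hypothetical seed inputs an AI assistant might propose for an
# email-validation routine; real seeds would come from your own prompts.
AI_SEEDS = ["user@example.com", "a@b.co", "@missing-local.com", "no-at-sign"]

def mutate(s: str, rng: random.Random) -> str:
    """Apply one random mutation (append, truncate, or substitute) to a seed."""
    ops = [
        lambda t: t + rng.choice(string.printable),
        lambda t: t[: rng.randrange(len(t))],
        lambda t: t.replace(rng.choice(t), rng.choice(string.printable), 1),
    ]
    return rng.choice(ops)(s)

def is_valid_email(s: str) -> bool:
    """Toy stand-in for the real code under test."""
    return s.count("@") == 1 and all(part for part in s.split("@"))

def fuzz(rounds: int = 200, seed: int = 0) -> int:
    """Run mutated inputs through the target; return the number of crashes."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(rounds):
        candidate = mutate(rng.choice(AI_SEEDS), rng)
        try:
            is_valid_email(candidate)  # property: must never raise
        except Exception:
            crashes += 1
    return crashes
```

The AI seeds anchor the search near realistic inputs, while the random mutations push beyond what either a human or the model would enumerate by hand; a fixed RNG seed keeps failures reproducible, which matters for the evidence-gathering step in recommendation 5.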

Risks and caveats

References and resources

Speakers and sources

