Summary of "Review agents work with Agent Debug Logs and Chat Debug View | Ep 5 of 6 - VS Code Learn"
Short summary
A walkthrough of the VS Code + GitHub Copilot extension tooling for inspecting agent behavior: Agent Debug Logs and Chat Debug View. The video demonstrates how to find per-session logs, inspect calls to LLMs, and use built-in tools to troubleshoot unexpected agent behavior.
Key concepts and product features
Agent Debug Logs
- Session-specific logs accessible from a chat session (three-dot menu → Show Agent Debug Logs).
- Shows loading steps (instructions, agents, hooks, skills) and the file sources for custom skills.
- Lists each tool call and model call with metadata: model name, duration, token counts.
- Session summary view includes:
  - session type / location / status
  - created / last activity timestamps
  - model turns, tool calls, total tokens, errors, total events
- Agent flow chart visualizes the agent’s sequence of steps; clickable nodes reveal detailed calls.
Chat Debug View
- Exposes raw payloads sent to LLMs: system message, user message, assistant response, user memory, and environment context (date, runtime).
- Shows the chronological chat/tool-call trace and the final outputs produced after tools run.
- Hover/click to see per-call model, latency, and token usage.
Troubleshooting features and workflow
- Built-in commands (for example, /troubleshoot) analyze unexpected agent behavior by inspecting the debug logs (JSONL) and return details such as where skills are loaded from (user vs. workspace level).
- Unread sessions are visually marked; session metadata reports context usage and token totals.
- Copilot/VS Code compacts historical context in the background so only salient implementation details consume the active context window.
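Because the debug logs are stored as JSONL, part of what /troubleshoot reports (such as whether a skill came from the user or workspace level) can also be recovered with a short script. The sketch below is illustrative only: the `event`, `name`, and `source` field names are assumptions, not the documented log schema, so adjust them to match the entries in your actual log files.

```python
import json

def skill_sources(jsonl_text):
    """Return (skill name, source path) pairs from skill-load log entries.

    Assumes each JSONL line is an object and that skill loads are marked
    with a hypothetical "event": "skill_loaded" field -- check the real
    log entries for the actual schema before relying on this.
    """
    pairs = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue  # skip blank lines between records
        entry = json.loads(line)
        if entry.get("event") == "skill_loaded":
            pairs.append((entry["name"], entry["source"]))
    return pairs

# Made-up sample entries mirroring the kind of metadata the video shows:
sample = "\n".join([
    '{"event": "skill_loaded", "name": "lint-fix", "source": ".vscode/skills/lint-fix.md"}',
    '{"event": "model_call", "model": "gpt-4o", "tokens": 900}',
    '{"event": "skill_loaded", "name": "deploy", "source": "~/.copilot/skills/deploy.md"}',
])
for name, source in skill_sources(sample):
    # Crude heuristic: workspace-relative paths start with ".", user paths don't.
    scope = "workspace" if source.startswith(".") else "user"
    print(f"{name}: {scope} ({source})")
```

A filter like this makes it easy to spot a skill silently shadowed by a same-named file at the other level.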
Practical guide / steps shown in the video
- Open a chat session in VS Code (GitHub Copilot extension).
- Click the session three-dot menu → Show Agent Debug Logs to view session logs.
- Inspect loading entries to confirm custom skills and their file sources.
- Use the Session list to view summary stats (model turns, tokens, errors).
- Open the Agent Flow Chart to trace execution flow and click nodes for details.
- Click Show Chat Debug View to see raw request/response JSON and system/user messages.
- Hover/click individual calls to see model, token counts, and latency.
- Run troubleshooting commands (e.g., /troubleshoot) to get automated analysis (where skills are loaded from, context breakdown).
- Review token usage and context-compaction behavior to optimize prompts and token consumption.
Analysis / takeaways
- These debugging views provide granular, transparent visibility into tool calls, LLM requests, and skill loading—useful for diagnosing missing skills, unexpected outputs, or high token use.
- Token-level metadata and flow charts help optimize prompts, trace tool interactions, and understand how Copilot compacts context.
- Debug logs are stored in machine-readable formats (JSONL), enabling programmatic analysis if needed.
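As a sketch of that programmatic analysis, the snippet below aggregates per-model token totals from JSONL log lines, the same figures the session summary surfaces. The `event`, `model`, and `tokens` field names are assumptions for illustration; inspect your own log entries for the real keys.

```python
import json
from collections import defaultdict

def summarize_token_usage(jsonl_text):
    """Sum tokens per model across model-call entries in a JSONL log.

    The schema here ("event" == "model_call" with "model" and "tokens"
    fields) is a hypothetical stand-in for the actual log format.
    """
    totals = defaultdict(int)
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        if entry.get("event") == "model_call":
            totals[entry.get("model", "unknown")] += entry.get("tokens", 0)
    return dict(totals)

# Made-up entries mirroring the per-call metadata shown in the debug views:
sample = "\n".join([
    '{"event": "model_call", "model": "gpt-4o", "tokens": 1200, "duration_ms": 840}',
    '{"event": "tool_call", "tool": "read_file", "duration_ms": 12}',
    '{"event": "model_call", "model": "gpt-4o", "tokens": 300, "duration_ms": 410}',
])
print(summarize_token_usage(sample))  # {'gpt-4o': 1500}
```

Comparing these totals across sessions is a quick way to confirm whether a prompt change actually reduced token consumption.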
Main speakers / sources
- Presenter / VS Code Learn series narrator (demonstration inside VS Code).
- Tools referenced: VS Code and the GitHub Copilot extension (Agent Debug Logs, Chat Debug View).
Category
Technology