Summary of "AI Command Center dla PM-a: Jak pracować z Gemini CLI w praktyce?" ("AI Command Center for the PM: how to work with Gemini CLI in practice")
Overview
A hands-on demo showing how to use Gemini CLI as an "AI command center" for product management, presented by Darek (product owner) with Kuba as technical sparring partner and moderator. The goal is to automate repetitive PM work (context collection, user-story generation, meeting/transcript analysis, backlog updates, competitor research) while keeping a human in the loop for review and control.
Key technologies and platforms shown
- Gemini CLI / Gemini model (main LLM and agent interface)
- gemini.md — project-level Markdown memory file used as persistent context
- MCP (Model Context Protocol) servers/tools for integrating external systems
- Integrations demonstrated or discussed: Google Drive, Jira/Atlassian, Miro, Confluence, Figma, Perplexity, WebFetch (web scraping), Google Search
- Editor/UX: VS Code with Gemini CLI in terminal + Markdown previews
- Knowledge management: Obsidian for backlinks/graph-based knowledge & storing MD artifacts
- Local small LLMs via Ollama (e.g., 7B models) for transforming/obfuscating sensitive data before it is sent to cloud models
- Other agent environments mentioned: Claude Code, Cursor CLI, Google Antigravity
- Automation platforms: Make / n8n-style automations using MCP triggers
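The backbone of the setup is the gemini.md memory file. A minimal sketch of what it might contain, assuming a hypothetical financial-consultation product (all names and details below are illustrative, not taken from the demo):

```markdown
# Project memory

## Vision
Help retail customers book and prepare for financial consultations online.

## Stakeholders
- Product Owner, design lead, compliance officer, engineering lead

## Strategy
- Current focus: reduce consultation no-show rate
- Out of scope: in-branch scheduling

## Conventions
- User stories live in stories/, one Markdown file per story
- Process modules live in processes/, e.g. processes/financial-consultation.md
```

Because the file is loaded into every session, conventions written here (file locations, story format) shape all subsequent generations without being repeated in each prompt.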
Demo / practical workflow (high-level steps)
- Maintain a gemini.md file with project vision, stakeholders, strategy — loaded automatically into the agent context as memory.
- Model business processes as modules (e.g., “Financial Consultation”) stored as MD files.
- Use reusable Gemini CLI commands/templates (defined in project TOML files) to generate user stories from a process module. Commands accept placeholders and file references (via @ references).
- Preview generated Markdown user stories in VS Code; stories include description, acceptance criteria, scenarios and a standardized format.
- Collect meeting transcripts (from Google Drive or other sources) via an MCP server tool and download them into the project.
- Run an “analyze transcript” command to summarize, extract risks and open questions, and produce a condensed Summary.md.
- “Enrich” user stories by merging transcript-derived insights into the user stories (iterative — agent proposes changes, PM reviews/edits, then saves).
- Push selected stories to Jira using the Atlassian MCP tool (add to backlog or create epics).
- Use Obsidian for longer-term knowledge graphs, backlinks and tagging across artifacts created by Gemini CLI.
- Run competitor research via Google Search + Perplexity (through MCP), collect links and build SWOT / strategic hypotheses.
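One way the reusable commands could look on disk. Gemini CLI custom commands are TOML files under .gemini/commands/; the file name and prompt wording below are assumptions, and the exact placeholder syntax ({{args}}, @ file references) should be checked against the current Gemini CLI docs:

```toml
# .gemini/commands/story.toml
# Invoked in the CLI as: /story processes/financial-consultation.md
description = "Generate user stories from a business-process module"

prompt = """
Using the project memory and the process module at @{{args}},
generate user stories in our standard format (description,
acceptance criteria, scenarios). Save each story as a separate
Markdown file under stories/.
"""
```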
Product features, CLI capabilities and limitations
- Commands/templates: Author-defined, reusable prompt schemas with placeholders to make outputs repeatable and consistent (for example, user story templates).
- File-based memory: gemini.md acts as agent memory and is always included in context.
- MCP tools: Connectors to external services (Google Drive, Jira, Miro, etc.) that agent commands can call.
- File preview + Markdown-first workflow: VS Code preview helps QA formatting before sending to backlog.
- Composition: Commands can call other commands and MCP tools. Current limitation: no robust automatic context transfer across commands (subagents/agent orchestration coming soon).
- Agents/Subagents: Planned feature to enable flows and context handoff between components.
- Hooks: pre-tool hooks were discussed as a concept, to detect and shield sensitive content before a tool is invoked.
- Context management: Gemini CLI auto-compresses context when large; manual compression/branching is possible to manage context windows.
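MCP connectors are declared in Gemini CLI's settings.json under an mcpServers key. A sketch of what wiring up the Atlassian tools might look like; the server command and URL here are illustrative, so consult the provider's MCP documentation for the real values:

```json
{
  "mcpServers": {
    "atlassian": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.atlassian.com/v1/sse"]
    }
  }
}
```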
Security, privacy and cost considerations
Data governance options:
- Enterprise API keys / enterprise mode (with promised guarantees that data is not retained or used for training)
- Zero-retention / no-storage tiers on some providers
- Local models (Ollama) for pre-processing/encrypting sensitive fragments before sending to cloud LLMs
- Manual / programmatic redaction or tokenization (keyword-based obfuscation)
- Pre-tool hooks suggested to automatically route sensitive data to local models
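The keyword-based obfuscation option can be as simple as a dictionary of sensitive terms mapped to stable placeholder tokens, swapped out before a prompt leaves the machine and swapped back into the model's answer. A minimal sketch; the term list and token names are illustrative:

```python
import re

# Known sensitive terms mapped to stable placeholder tokens (illustrative).
SENSITIVE_TERMS = {"Acme Corp": "CLIENT_1", "Jane Doe": "PERSON_1"}

def redact(text: str) -> str:
    """Replace sensitive terms with placeholders before a cloud call."""
    for term, token in SENSITIVE_TERMS.items():
        text = re.sub(re.escape(term), token, text, flags=re.IGNORECASE)
    return text

def restore(text: str) -> str:
    """Put the real terms back into the model's answer."""
    for term, token in SENSITIVE_TERMS.items():
        text = text.replace(token, term)
    return text
```

A pre-tool hook, once available, could call redact() automatically before any MCP tool receives text, instead of relying on the PM to remember.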
Costs:
- Token/subscription costs for cloud LLM use; typical PM usage is often modest, so many PMs may stay within a minimal subscription tier
- Additional paid services (Perplexity, advanced transcription, Atlassian APIs) incur extra fees
- Local models trade token cost for local compute cost
Best practices and recommendations
- Start small: create gemini.md (project memory), document personas/vision, capture key processes, then automate incrementally.
- Use Markdown consistently to structure information and make it LLM-friendly.
- Use templates to standardize user stories and reduce cognitive load for the team.
- Keep a human-in-the-loop: review, iterate and save AI-generated outputs rather than auto-accept.
- Manage context wisely: split knowledge into smaller files/modules, use selective region prompts (VS Code selection) to limit tokens.
- Secure sensitive data: use enterprise retention settings, zero-retention models, local pre-processing, or keyword-based obfuscation.
- Balance standardization vs flexibility: standardize formats (e.g., story template) but allow teams to keep some freedom.
Extensions, automation and roadmap ideas discussed
- Expand discovery / analytics: connect Amplitude, Google Analytics, database replicas via MCP for metric-driven queries.
- Automated competitor SWOT compilation and hypothesis generation (TOWS-like outputs).
- Organizational roll-out: centralizing standards and read-only oversight for managers to get cross-product risk/strategy visibility.
- Agents / subagents: expected to enable better context handoff and multi-step flow automation.
- Pre-tool hooks & content scanning: detect sensitive content and reroute/obfuscate before cloud calls.
- Cross-platform generalization: the same pattern can be used with Claude Code, Cursor, and other CLI/agent platforms.
Guides, tutorials and demos included or implied
- Defining gemini.md as persistent project memory.
- Creating reusable Gemini CLI commands/templates and using placeholders.
- Generating user stories from a business-process module.
- Fetching and saving meeting transcripts via MCP (Google Drive example).
- Summarizing transcripts and enriching user stories with extracted insights.
- Pushing stories into Jira via Atlassian MCP.
- Integrating project files with Obsidian for knowledge graphs and backlinks.
- Example of local model (Ollama) use for sensitive-data handling.
- Tips for context management (Markdown structure, selection-limited prompts, compression).
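The local-model step can talk to Ollama's REST API directly. A sketch that only builds the HTTP request; it assumes an Ollama server on localhost:11434 with a pulled 7B model, and the model name and prompt wording are illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(fragment: str, model: str = "mistral:7b") -> urllib.request.Request:
    """Build a request asking a local model to anonymize a text fragment."""
    payload = {
        "model": model,
        "prompt": (
            "Replace all personal and company names in the text below "
            "with neutral placeholders. Return only the rewritten text.\n\n"
            + fragment
        ),
        "stream": False,
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# With a running Ollama server, the cleaned text would come back as:
# resp = urllib.request.urlopen(build_request("Call with Jane Doe at Acme"))
# cleaned = json.loads(resp.read())["response"]
```

The point of running this locally is that the raw fragment never leaves the machine; only the anonymized result is forwarded to the cloud LLM.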
Product / tool mentions and alternatives
- Gemini (Google) + Gemini CLI
- MCP (Model Context Protocol) — open standard originated by Anthropic for connecting agents to external tools and data
- VS Code (developer UX with terminal + Markdown preview)
- Obsidian (knowledge management)
- Jira / Confluence / Miro / Figma connectors
- Perplexity, WebFetch, Google Search
- Ollama (local LLMs) for privacy-sensitive transformations
- Claude Code, Cursor CLI, Google Antigravity (mentioned as alternative platforms)
- Automation platforms: Make / n8n (MCP triggers)
Practical takeaways
- The approach turns time-consuming PM tasks (transcription processing, story creation, competitor research, backlog population) into semi-automated pipelines, freeing time for strategic work.
- Emphasis on iterative prompt engineering and human QA to avoid hallucinations.
- Architecture relies on file-based project memory, modular commands, MCP connectors and eventual agent orchestration for richer automation.
Speakers / sources
- Main presenters: Darek — Product Owner (demonstrator, built the solution); Kuba — colleague / sparring partner (moderator).
- Other people referenced: Grzeszek (co-creator of a related tool), Pragmatic Coders (organization hosting the webinar).
- Technologies/orgs mentioned: Gemini (Google), MCP (originated by Anthropic), Ollama, Perplexity, Jira/Atlassian, Obsidian.
Note: this summary can be converted into a step-by-step quickstart checklist (minimum viable setup) to implement a similar Gemini CLI–based PM command center.