Summary of "Agentic AI With Langgraph And MCP Crash Course-Part 1"
High-level summary
This is Part 1 of a three-part crash course on building "agentic" AI applications using LangGraph and MCP. Part 1 (this video) focuses on fundamentals and a long, hands-on coding walkthrough. Later parts cover advanced agent/workflow patterns (Part 2) and end-to-end LLMOps/deployment/evaluation (Part 3).
Part 1 is a practical tutorial that walks through:
- Project setup
- Creating a stateful graph chatbot
- Integrating external tools
- Adding memory and human‑in‑the‑loop behavior
- Streaming outputs
- Implementing a ReAct-style agent loop
- Building and consuming MCP servers (multi‑server tool endpoints)
Course plan / guides shown
- Part 1 (this video) — fundamentals: chatbots, tools, multi-tool integration, memory, human-in-the-loop, streaming, building MCP servers from scratch, graph concepts (nodes/edges/state), graph API vs. functional API.
- Part 2 — advanced LangGraph: multi-agent workflows, multi-state management, functional APIs, debugging & monitoring with LangGraph Studio.
- Part 3 — end-to-end projects: LLMOps pipelines, deployment (Hugging Face Spaces), evaluation metrics & tracking (MLflow, AWS), dashboards (Grafana).
Tools and libraries used (installation & setup)
- Project manager:
`uv` — a fast, Rust-based Python project manager used to initialize projects, create virtual environments, and install requirements (examples: `uv init`, `uv venv`, `uv pip install -r requirements.txt`).
- Key Python packages referenced:
`langgraph`, `langchain` (and adapters), `langsmith` (for tracking/evaluation), `langchain_groq` (Groq provider), `langchain_tavily` (web search), `langchain_mcp_adapters`, `fastmcp`, `ipykernel` (for Jupyter)
- LLMs:
- Groq chat models with Llama 3 style model names shown; OpenAI noted as an alternative.
Core Langraph concepts explained and demonstrated
State graph
The state graph has three core components:
- Nodes — units of work (the functions that do the processing)
- Edges — control/flow between nodes
- State — shared variables carried through execution
Example flow: YouTube URL → transcript node → title node → content node.
State variables and reducers
- State is typed (using `Annotated` type hints in the examples).
- Reducers control how state fields are updated; e.g., annotating `messages` as `Annotated[list, add_messages]` appends new messages to the conversation history instead of overwriting it.
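To make the reducer idea concrete, here is a toy, stdlib-only sketch of how a reducer attached via `Annotated` metadata can change merge behavior. This is an illustration of the concept, not LangGraph's internal implementation; `apply_update` is a hypothetical helper.

```python
import operator
from typing import Annotated, get_args, get_origin, get_type_hints

class State:
    # field with a reducer: updates are appended (like add_messages)
    messages: Annotated[list, operator.add]
    # plain field with no reducer: updates overwrite the old value
    title: str

def apply_update(state: dict, update: dict, schema=State) -> dict:
    """Merge a node's partial update into the state, honoring reducers."""
    hints = get_type_hints(schema, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        hint = hints.get(key)
        if hint is not None and get_origin(hint) is Annotated:
            _, reducer = get_args(hint)[:2]      # metadata carries the reducer
            merged[key] = reducer(merged.get(key, []), value)
        else:
            merged[key] = value                  # default: overwrite
    return merged

state = {"messages": [{"role": "user", "content": "Hi"}], "title": "draft"}
update = {"messages": [{"role": "assistant", "content": "Hello!"}], "title": "final"}
state = apply_update(state, update)
# messages now holds both turns; title was overwritten
```

The point is that the schema, not the node, decides how each field is merged, which is why `add_messages` can accumulate history while other fields are simply replaced.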
Node definitions and graph building
- Build graphs with builder APIs such as `add_node` and `add_edge` on a `StateGraph`.
- Compile graphs and visualize them (Mermaid/PNG).
- Run synchronously with `graph.invoke`.
Streaming
- Streaming APIs: `graph.stream` / `graph.astream` and `graph.astream_events` (sync and async variants).
- Streaming modes:
  - `stream_mode="updates"` — stream only each node's update (useful for incremental, single-field updates).
  - `stream_mode="values"` — stream the accumulated state values (e.g., the full conversation/history list).
- Events provide lower-level details (event payloads and types) useful for UI integrations and debugging.
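The difference between the two stream modes can be illustrated with a toy, stdlib-only generator; `run_nodes` is a hypothetical stand-in, not LangGraph's streaming machinery:

```python
def run_nodes(state, nodes, stream_mode="updates"):
    """Yield either per-node updates or the accumulated state after each node."""
    for node in nodes:
        update = node(state)            # each node returns a partial update
        state = {**state, **update}     # merge (no reducers in this toy)
        if stream_mode == "updates":
            yield update                # only what this node changed
        else:  # "values"
            yield dict(state)           # the full accumulated state so far

nodes = [
    lambda s: {"transcript": "..."},
    lambda s: {"title": "Agentic AI Crash Course"},
]

updates = list(run_nodes({"url": "u"}, nodes, stream_mode="updates"))
values = list(run_nodes({"url": "u"}, nodes, stream_mode="values"))
# updates[0] contains only the transcript field;
# values[1] contains url, transcript, and title together
```

"updates" is handy for showing incremental progress in a UI, while "values" gives each consumer a complete snapshot without having to merge updates itself.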
Tool integration and agents
- Tools are regular functions (or adapters) converted into callable tools. Docstrings on tools are important — the LLM uses them to decide when to call a tool.
- Bind tools to the LLM with `llm.bind_tools(tools)` so the model can call tools by name/args. Example tools: Tavily search for web search and a custom `multiply` function.
- Use prebuilt tool nodes and conditional edges so graph execution routes to tools (or to the end) depending on whether the assistant response indicates a tool call.
- ReAct agent pattern:
  - The LLM acts as a controller in a reason/act/observe loop.
- Tools return to the LLM (not straight to the final output), enabling multi‑step reasoning where the LLM decides next tool calls (e.g., “provide recent AI news and then multiply 5×10”).
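The loop above can be sketched in plain Python with a scripted stand-in for the model; `fake_llm` is hypothetical and only mimics a tool-calling LLM's two-step behavior, but the loop structure matches the pattern described:

```python
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""  # docstrings guide real tool selection
    return a * b

TOOLS = {"multiply": multiply}

def fake_llm(messages):
    """Scripted stand-in: first asks for a tool, then answers from its output."""
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant",
                "tool_calls": [{"name": "multiply", "args": {"a": 5, "b": 10}}]}
    result = next(m["content"] for m in messages if m["role"] == "tool")
    return {"role": "assistant", "content": f"The answer is {result}."}

def react_loop(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        reply = fake_llm(messages)
        messages.append(reply)
        calls = reply.get("tool_calls")
        if not calls:                       # no tool call -> final answer
            return reply["content"]
        for call in calls:                  # observe: tool output goes back
            output = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": str(output)})

answer = react_loop("What is 5 x 10?")  # -> "The answer is 50."
```

The key design point is the feedback edge: tool results are appended to the message list and fed back to the model, so it can decide whether to call another tool or finish.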
Memory and checkpointing
- Checkpointing can be added using a `MemorySaver` (an in-memory checkpoint saver) when compiling the graph (`checkpointer` parameter).
- Use a unique session/thread ID per session to persist and recover conversation context across graph invocations — enables continuity (e.g., “Do you remember my name?”).
Human‑in‑the‑loop
- Demonstrated an interrupt/resume flow:
  - Create a `human_assistance` tool that triggers an interrupt and waits for human input.
  - Resume graph execution by sending a resume command with the human-provided data.
- Useful for approval or feedback steps inside workflows.
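The pause/resume shape of this flow can be mimicked with a plain Python generator; this is a toy illustration only (LangGraph's real mechanism is an interrupt that suspends the checkpointed graph until a resume command arrives):

```python
def approval_workflow(draft: str):
    """Pause mid-run, hand control to a human, then resume with their input."""
    # ... automated work happens here ...
    feedback = yield {"interrupt": "Please review this draft", "draft": draft}
    # execution resumes at this exact point once a human responds
    yield {"final": f"{draft} (approved: {feedback})"}

run = approval_workflow("Launch plan v1")
pause = next(run)                 # runs until the interrupt point
# ... a human reads pause["draft"] and decides ...
result = run.send("looks good")   # resume with the human-provided data
```

As in the tutorial's demo, the workflow's local state survives the pause, so the human's answer is injected exactly where execution stopped.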
MCP (Model Context Protocol) tutorial
Architecture
- MCP servers expose tools.
- An MCP client aggregates servers and provides tools to the LLM/agent.
- The application talks to the client to use tools across servers.
Examples built
Two MCP servers built using FastMCP:
- Math server
  - Exposes `add` / `multiply` tools.
  - Uses stdio transport for local stdio-based tool calls (useful for local CLI testing).
- Weather server
  - Exposes `get_weather`.
  - Runs as a streamable HTTP server (exposes an HTTP endpoint, e.g., `/mcp`).
Client and agent
- Client: the `langchain_mcp_adapters` client (`MultiServerMCPClient`) connects to multiple MCP servers (stdio and streamable HTTP transports), pulls tool metadata, and registers tools with the agent.
- Agent: create a ReAct agent (e.g., `create_react_agent`) that can dispatch calls to tools on different servers.
- Demonstrated workflow: `agent.invoke` routes arithmetic queries to the math server (stdio transport) and weather queries to the weather HTTP server.
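The aggregation-and-routing role of the client can be sketched with a stdlib-only toy; `ToyMCPClient` is hypothetical and simply stands in for what a real MCP client does over stdio/HTTP transports (collect tools from several servers into one registry and dispatch each call to the owning server):

```python
# Each "server" is modeled as a dict of named tools; a real MCP server
# would expose these over stdio or streamable HTTP.
math_server = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}
weather_server = {
    "get_weather": lambda city: f"Sunny in {city}",  # canned response
}

class ToyMCPClient:
    """Aggregates tools from multiple servers into one flat registry."""
    def __init__(self, servers):
        self.tools = {}
        for server_name, tools in servers.items():
            for tool_name, fn in tools.items():
                self.tools[tool_name] = (server_name, fn)

    def call(self, tool_name, **args):
        server_name, fn = self.tools[tool_name]  # route to the owning server
        return server_name, fn(**args)

client = ToyMCPClient({"math": math_server, "weather": weather_server})
server, product = client.call("multiply", a=5, b=10)       # math server
server2, forecast = client.call("get_weather", city="Goa")  # weather server
```

The agent only ever sees one flat tool list; which server actually executes a call is the client's routing decision, mirroring the stdio-vs-HTTP split in the demo.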
Practical tips & notes
- Kernel restarts may be required after adding environment variables or API keys (demonstrated in the tutorial).
- Tool docstrings matter — the LLM uses them to decide tool applicability.
- Recompile graphs after code changes (for example, when changing a tool implementation).
- Streaming and event debugging modes are useful for building UIs and for low‑level introspection.
- Part 3 will cover production concerns: LLMOps, metrics, MLflow, AWS, Grafana dashboards, and deployments (Hugging Face Spaces).
Main speaker and technologies mentioned
- Presenter: refers to himself as “Kish” / “Kush” — primary instructor of the crash course.
- Technologies / libraries:
- LangGraph (graph API and state graph)
- LangChain
- LangSmith (tracking/evaluation)
- Groq (LLM provider)
- Tavily (web search API)
- FastMCP
- langchain_mcp_adapters
- Jupyter / ipykernel
- `uv` package manager
- LLMs (Llama 3 referenced)
- MLflow / AWS / Grafana / Hugging Face (mentioned for later parts)