Summary of "LangChain Vs LangGraph | Agentic AI using LangGraph | Video 3 | CampusX"
Video overview
- Presenter: Nitesh (CampusX YouTube channel).
- Placement: 3rd video in a LangGraph playlist. Prerequisite: basic familiarity with LangChain (earlier playlist videos recommended).
- Goal: explain why LangGraph exists, give a technical intuition, and compare LangChain vs LangGraph using a worked example (an automated hiring workflow).
LangChain — recap
What it is and strengths:
- Open-source library to simplify building LLM-based applications.
- Provides modular building blocks:
- Model: unified interface to different LLM providers (OpenAI, Anthropic/Claude, Hugging Face, local models).
- Prompts: prompt templates and prompt engineering components.
- Retrievers: connect to vector stores / knowledge bases for RAG.
- Chains: compose components into linear pipelines (output of one feeding the next).
- Tools / Agents: connect LLMs to external APIs or Python functions; build basic agent behaviors.
- Best for: simple, linear, single-session workflows — chatbots, summarizers, basic RAG systems, short-lived agent-style integrations.
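The "linear pipeline" idea the recap describes is essentially function composition: each component's output feeds the next. A minimal plain-Python sketch of that shape (no LangChain dependency; `fake_llm` and the step names are invented stand-ins for a real model and real components):

```python
# Conceptual sketch of a linear chain: each step's output feeds the next.
# `fake_llm` stands in for a real model call; all names are illustrative.

def fake_llm(prompt: str) -> str:
    return f"LLM response to: {prompt}"

def make_prompt(topic: str) -> str:
    # Stand-in for a prompt template component.
    return f"Summarize the topic: {topic}"

def parse_output(raw: str) -> str:
    # Stand-in for an output parser.
    return raw.strip()

def chain(topic: str) -> str:
    # Linear pipeline: prompt -> model -> parser; no branching or loops.
    return parse_output(fake_llm(make_prompt(topic)))

print(chain("vector databases"))
```

The whole flow is a straight line, which is exactly the case where LangChain's chains shine and where no orchestration framework is needed.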
Worked example used to motivate differences
Automated hiring flow (high-level steps):
- Receive hiring request.
- Generate job description (JD) via LLM.
- Human approval loop for JD.
- Post JD (e.g., LinkedIn/Indeed APIs).
- Wait 7 days.
- Monitor applications.
- Conditional branches (modify JD if insufficient applicants).
- Resume monitoring (continue collecting applications).
- Shortlist (resume parsing + LLM scoring).
- Schedule interviews (calendar/email APIs).
- Conduct interviews.
- Send offers and renegotiate if needed.
- Onboarding.
This flow includes conditionals, loops, pauses/waits, external triggers, human approvals, long-running steps, and multiple integrations — illustrating limitations of linear pipelines and motivating an orchestration approach.
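To see why this flow is not a straight line, here is a rough plain-Python sketch of just its first few steps (all function bodies are invented stubs; the thresholds and the commented-out wait are illustrative only):

```python
# Illustrative control flow of the hiring example; bodies are stubs.
def generate_jd(request): return f"JD for {request}"
def approved_by_human(jd): return True          # stand-in for a human gate
def post_jd(jd): pass                           # stand-in for a job-board API call
def applicant_count(): return 42                # stand-in for monitoring
def modify_jd(jd): return jd + " (revised)"

def hiring_flow(request):
    jd = generate_jd(request)
    while not approved_by_human(jd):            # human-approval loop
        jd = modify_jd(jd)
    post_jd(jd)
    # time.sleep(7 * 24 * 3600)                 # "wait 7 days" is impractical in-process
    if applicant_count() < 50:                  # conditional branch
        jd = modify_jd(jd)
        post_jd(jd)
    return jd
```

Even this fragment already needs a loop, a conditional, a human gate, and a multi-day pause, none of which fit a linear chain.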
Key challenges when implementing complex, non-linear workflows in LangChain
- Control-flow complexity
  - LangChain chains are primarily linear; branching, loops, and arbitrary jumps need custom Python glue (while loops, if/else, routing).
  - Glue code hurts maintainability and debugging.
- State handling
  - LangChain is effectively stateless at the workflow level: it has conversational memory, but no general per-workflow key-value store.
  - You must manage global dictionaries or external storage manually and pass state around, which is error-prone when many fields evolve.
- Event-driven / long-running execution
  - LangChain is designed for synchronous, short-lived execution; it is not built for pausing and resuming across days (e.g., waiting 7 days for applications).
  - Workarounds require splitting the flow into multiple chains plus external orchestration (more glue code).
- Fault tolerance / recovery
  - LangChain lacks built-in checkpointing and resume semantics; a crash mid-run typically requires a manual restart or complex recovery logic.
- Human-in-the-loop
  - LangChain can request synchronous human input, but long waits (hours or days) tie up compute; pausing indefinitely until human approval arrives is not a first-class capability.
- Nested workflows / reusability / multi-agent coordination
  - It is hard to encapsulate sub-workflows and reuse them as composable units within larger flows.
- Observability
  - LangSmith monitors LangChain LLM calls (inputs, outputs, tokens, latency), but it cannot fully observe custom glue code or orchestration logic outside LangChain components, so observability is partial.
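The state-handling and fault-tolerance pain points above usually translate into hand-rolled persistence code. A hedged sketch of that kind of glue (the file name and state keys are invented for illustration; a real setup would likely use a database):

```python
# Manual checkpointing glue: the kind of code LangChain leaves to you.
import json
import os

STATE_FILE = "hiring_state.json"   # illustrative path

def load_state() -> dict:
    # Manual recovery: reread whatever was last saved before a crash.
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"step": "start", "jd": None, "applicants": 0}

def save_state(state: dict) -> None:
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

state = load_state()
if state["step"] == "start":
    state.update(step="jd_generated", jd="Draft JD")
    # Every step must remember to persist, or progress is lost on a crash.
    save_state(state)
```

Every workflow step has to remember to save, every restart has to re-derive where it left off, and none of it is visible to LangSmith.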
How LangGraph addresses these problems (core ideas & features)
- Graph-as-orchestration
  - Each task is a node (a Python function) and edges are control-flow transitions. The workflow is a graph rather than a line, so branching, loops, and jumps are first-class.
- Stateful execution
  - Workflows use an explicit state object (a Pydantic model or TypedDict) passed to every node. Nodes can read and mutate this shared state, so the whole workflow shares and updates state without manual glue.
- Event-driven and long-running flows
  - Built-in checkpointing lets you pause and resume workflows (saving state to memory or a database), enabling waits for time-based triggers or external/human events.
- Fault tolerance
  - Checkpointing plus retry logic: transient errors can be retried, and after a crash the workflow resumes from the last checkpoint instead of restarting from the beginning.
- Human-in-the-loop as a first-class capability
  - Execution can pause indefinitely (minutes, hours, or days) awaiting human input, then resume when the approval or input arrives.
- Nested workflows (subgraphs)
  - Nodes can themselves be subgraphs, enabling encapsulation, reuse, and multi-agent systems (each agent or component can be its own subgraph).
- Observability
  - Tight integration with LangSmith: a full timeline of node executions, state before and after each node, messages exchanged, human approvals, and auditable run histories (better tracing than LangChain plus glue code).
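The "explicit state object" idea can be illustrated with Python's `TypedDict` alone, independent of LangGraph itself (the schema and field names here are invented for the hiring example; LangGraph's actual state handling also supports Pydantic models and per-field reducers):

```python
# Typed workflow state: an explicit schema every node receives and updates.
from typing import Optional, TypedDict

class HiringState(TypedDict):
    # Explicit schema for the shared workflow state (illustrative fields).
    request: str
    jd: Optional[str]
    approved: bool

def generate_jd(state: HiringState) -> HiringState:
    # A node reads from and writes to the shared state, not globals.
    state["jd"] = f"JD for {state['request']}"
    return state

s: HiringState = {"request": "data analyst", "jd": None, "approved": False}
s = generate_jd(s)
```

Because the schema is explicit, every node knows exactly which fields exist, which is what removes the ad-hoc global dictionaries the LangChain approach needed.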
Practical implications / recommendations
- Use LangChain when:
  - You need simple linear flows: prompt chains, single-session RAG, quick chatbots, basic agents with short-lived tool calls.
  - You want the modular LangChain components (models, retrievers, loaders, prompt templates).
- Use LangGraph when:
  - You need complex, non-linear workflows: conditional branching, loops, jumps, multi-step long-running processes, human approvals, event-driven triggers, multi-agent orchestration.
  - You require stateful execution, checkpointing, fault tolerance, nested subgraphs, and full observability/auditing.
- Complementarity
  - LangGraph is built on top of LangChain and still uses LangChain components (models, retrievers, loaders). The typical pattern: build components with LangChain and orchestrate them with LangGraph.
References and tooling mentioned
- LangChain (library)
- LangGraph (orchestration framework from the LangChain team)
- LangSmith (observability / monitoring for LLM applications)
- Other frameworks referenced briefly: CrewAI, Microsoft AutoGen (or similar), and “SD” (context unclear).
- External integrations in the example: LinkedIn API, job boards, calendar APIs, mail APIs, resume parser, vector stores (RAG).
Code / implementation notes highlighted
- In LangGraph you define nodes (Python functions) and add them to a Graph; add conditional edges (edge predicates) and the engine manages execution, state passing, and checkpointing.
- With LangGraph, control flow and state handling need little to no glue code; with LangChain you typically write glue code to stitch flows together and manage state.
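The "define nodes, add them to a graph, let the engine drive" idea can be sketched without the library itself. This toy executor is conceptual only, not the LangGraph API (which uses `StateGraph`, `add_node`, and `add_conditional_edges`); node names, counts, and thresholds are invented:

```python
# Toy graph executor illustrating the core idea: nodes are functions that
# read/update a shared state dict and return the name of the next node.
# Conceptual sketch only -- this is NOT the LangGraph API.

def generate_jd(state):
    state["jd"] = f"JD for {state['request']}"
    return "check_applicants"                   # plain edge to the next node

def check_applicants(state):
    # Conditional edge: loop back if too few applicants, else finish.
    if state["applicants"] < state["target"]:
        state["applicants"] += 30               # stand-in for reposting the JD
        return "check_applicants"
    return "END"

NODES = {"generate_jd": generate_jd, "check_applicants": check_applicants}

def run(state, entry="generate_jd"):
    node = entry
    while node != "END":
        node = NODES[node](state)               # the engine drives control flow
    return state

final = run({"request": "ML engineer", "applicants": 10, "target": 50})
```

The loop and the conditional live in the graph structure, not in hand-written glue around the chains; a real engine would add checkpointing and interrupts at each node boundary.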
Sources / speakers cited
- Main speaker: Nitesh (CampusX YouTube channel).
- External source referenced: Anthropic blog post “Building Effective Agents” (for agents vs workflows distinction).
Summary takeaway
- LangChain: excellent for component-level LLM apps and linear pipelines.
- LangGraph: designed to orchestrate robust, production-grade, stateful, event-driven, multi-step workflows that require branching, long-running execution, human-in-the-loop, fault recovery, nested subgraphs, and observability.
- Use both together: build components in LangChain, orchestrate complex flows in LangGraph.