Summary of "LangChain Vs LangGraph | Agentic AI using LangGraph | Video 3 | CampusX"

Video overview

LangChain — recap

What it is and strengths: an open-source framework of reusable building blocks (prompts, models, chains, memory, retrievers, tools) for composing LLM applications — well suited to component-level development and linear pipelines.

Worked example used to motivate differences

Automated hiring flow (high-level steps):

  1. Receive hiring request.
  2. Generate job description (JD) via LLM.
  3. Human approval loop for JD.
  4. Post JD (e.g., LinkedIn/Indeed APIs).
  5. Wait 7 days.
  6. Monitor applications.
  7. Conditional branches (modify JD if insufficient applicants).
  8. Resume monitoring.
  9. Shortlist (resume parsing + LLM scoring).
  10. Schedule interviews (calendar/email APIs).
  11. Conduct interviews.
  12. Send offers and renegotiate if needed.
  13. Onboarding.

This flow includes conditionals, loops, pauses/waits, external triggers, human approvals, long-running steps, and multiple integrations — illustrating limitations of linear pipelines and motivating an orchestration approach.
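To make the non-linear parts concrete, steps 6–8 alone already require a loop around a conditional branch. A plain-Python sketch (step bodies are stubs and the applicant threshold is hypothetical, purely to show the control flow):

```python
# Hypothetical sketch of the "monitor -> modify JD -> repost" loop in plain
# Python. Real step implementations would call LLMs and job-board APIs.

def generate_jd(state):
    state["jd"] = f"JD draft v{state['jd_version']}"
    return state

def count_applicants(state):
    # Stub: pretend each posting round attracts 10 more applicants.
    state["applicants"] += 10
    return state

def run_hiring_flow(min_applicants=25, max_rounds=5):
    state = {"jd_version": 1, "applicants": 0, "rounds": 0}
    state = generate_jd(state)
    # Loop with a conditional branch: repost a modified JD until enough
    # applicants arrive (or a round limit is hit).
    while state["applicants"] < min_applicants and state["rounds"] < max_rounds:
        state["rounds"] += 1
        state = count_applicants(state)
        if state["applicants"] < min_applicants:
            state["jd_version"] += 1   # modify the JD and repost
            state = generate_jd(state)
    return state
```

Even this small fragment is hand-rolled orchestration rather than a chain — which is exactly the gap the rest of the summary examines.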

Key challenges when implementing complex, non-linear workflows in LangChain

  1. Control-flow complexity

    • LangChain chains are primarily linear; branching, loops and arbitrary jumps need custom Python glue (while loops, if/else, routing).
    • Glue code hurts maintainability and debugging.
  2. State handling

    • LangChain is effectively stateless for workflow state (it has conversational memory but not a general per-workflow key-value store).
    • You must manage global dictionaries or external storage manually and pass state around — error-prone for many evolving fields.
  3. Event-driven / long-running execution

    • LangChain is designed for synchronous, short-lived execution; not built for pausing and resuming across days (e.g., wait 7 days for applications).
    • Workarounds require splitting into multiple chains and external orchestration (more glue code).
  4. Fault tolerance / recovery

    • LangChain lacks built-in checkpointing and resume semantics; crashes mid-run typically require manual restarts or complex recovery logic.
  5. Human-in-the-loop

    • LangChain can collect synchronous human input, but waits of hours or days tie up compute; pausing a workflow indefinitely until an approval arrives is not a first-class capability.
  6. Nested workflows / reusability / multi-agent coordination

    • Hard to encapsulate sub-workflows and reuse them as composable units within larger flows.
  7. Observability

    • LangSmith provides monitoring for LangChain LLM calls (inputs, outputs, tokens, latency), but it cannot observe custom glue code or orchestration logic that lives outside LangChain components — observability is only partial.
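Taken together, the workarounds above amount to hand-rolled orchestration. A condensed plain-Python sketch of what that glue code tends to look like — explicit state passing, file-based checkpoints, and a pause for human approval (all step names and file handling are hypothetical):

```python
import json
import os
import tempfile

# Hypothetical glue code combining the workarounds above: explicit state,
# file-based checkpointing, and a pause that waits for human approval.

STEPS = ["generate_jd", "await_approval", "post_jd", "shortlist"]

def generate_jd(state):
    return {**state, "jd": "Backend Engineer (draft)"}

def await_approval(state):
    if not state.get("approved"):
        # No way to block for days: persist state and exit, flagging a pause.
        return {**state, "paused": True}
    return state

def post_jd(state):
    return {**state, "posted": True}

def shortlist(state):
    return {**state, "shortlist": ["Asha", "Ben"]}

def run(path):
    """Resume from the checkpoint file and execute the remaining steps."""
    state = {"done": []}
    if os.path.exists(path):
        with open(path) as f:
            state = json.load(f)
    state.pop("paused", None)
    for step in STEPS:
        if step in state["done"]:
            continue                       # recovered from a prior run
        state = globals()[step](state)     # look up the step function by name
        if state.get("paused"):
            break                          # wait for an external event
        state["done"].append(step)
        with open(path, "w") as f:         # checkpoint after each step
            json.dump(state, f)
    with open(path, "w") as f:
        json.dump(state, f)
    return state

path = os.path.join(tempfile.mkdtemp(), "hiring.json")
first = run(path)                          # pauses at await_approval

# Later: a human approves; update the stored state and resume the run.
with open(path) as f:
    stored = json.load(f)
stored["approved"] = True
with open(path, "w") as f:
    json.dump(stored, f)
final = run(path)
```

Every line of this is application code the developer must write, test, and debug — which is the burden LangGraph is designed to remove.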

How LangGraph addresses these problems (core ideas & features)

Practical implications / recommendations

References and tooling mentioned

Code / implementation notes highlighted

Sources / speakers cited

Summary takeaway

  - LangChain: excellent for component-level LLM apps and linear pipelines.
  - LangGraph: designed to orchestrate robust, production-grade, stateful, event-driven multi-step workflows that require branching, long-running execution, human-in-the-loop, fault recovery, nested subgraphs and observability.
  - Use both together: build components in LangChain; orchestrate complex flows in LangGraph.
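The division of labor can be pictured with a toy graph runner (plain Python purely for illustration — this is not the real LangGraph API, which additionally provides state persistence, streaming, and human-in-the-loop support): components are plain functions, and an explicit graph with conditional edges decides what runs next.

```python
# Toy graph runner illustrating the pattern: build components as functions
# (LangChain's role) and orchestrate them via an explicit graph with
# conditional edges (LangGraph's role). All names here are hypothetical.

def generate_jd(state):
    return {**state, "jd": f"JD v{state.get('version', 1)}"}

def monitor(state):
    return {**state, "applicants": state.get("applicants", 0) + 10}

def modify_jd(state):
    return {**state, "version": state.get("version", 1) + 1}

def route(state):
    # Conditional edge: loop back to modify the JD until enough applicants.
    if state["applicants"] < 25:
        return "modify_jd"
    return "END"

NODES = {"generate_jd": generate_jd, "monitor": monitor, "modify_jd": modify_jd}
EDGES = {"generate_jd": "monitor", "modify_jd": "generate_jd", "monitor": route}

def run_graph(start, state):
    node = start
    while node != "END":
        state = NODES[node](state)
        nxt = EDGES[node]
        # An edge is either a fixed next node or a routing function.
        node = nxt(state) if callable(nxt) else nxt
    return state

final = run_graph("generate_jd", {})
```

The loop, branch, and routing logic live in the graph definition rather than in ad-hoc glue code — the core idea behind moving orchestration out of LangChain and into LangGraph.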
