LangChain is the most widely used framework for building with large language models — over 1,000 integrations, millions of developers, and the largest open-source ecosystem in the space. LangGraph is LangChain's own recommended approach for building agents: a graph-based state machine that makes complex, multi-step AI workflows reliable, inspectable, and controllable.
They are related but distinct. LangChain is a toolkit — a large collection of pre-built components for working with AI models: connectors for different LLMs, tools for searching the web, reading documents, querying databases, and chaining operations together. Think of it as a parts catalogue for building AI applications.
LangGraph is a framework built on top of LangChain specifically for agents. Where LangChain gives you the parts, LangGraph gives you the architecture. It defines how those parts are assembled into a reliable, stateful workflow that can loop, branch, pause for human input, and recover from errors.
LangChain itself recommends using LangGraph for any agent system, rather than LangChain's older agent abstractions. The two work together: LangGraph provides the flow control, LangChain provides the tools and model connections.
LangChain launched in October 2022 — just weeks before ChatGPT — and captured early adopters building the first wave of LLM applications. By the time competing frameworks emerged, LangChain had the biggest ecosystem, the most documentation, and the most Stack Overflow answers. Being first mattered enormously in a new field.
The practical effect: if you search for how to do something with AI agents, there is a strong chance the answer involves LangChain. More integrations, more examples, more developers who know it.
LangGraph represents agent workflows as a graph — a set of nodes (each node is a step, a model call, or a tool) connected by edges (transitions between steps). This is more powerful than a simple linear chain for three reasons: a graph can loop (repeating steps until a condition is met), it can branch conditionally at runtime, and it carries persistent state between steps.
Who should use LangChain/LangGraph: Developers building production AI applications who want the largest ecosystem, the most integrations, and the ability to find help when things break. It is not the simplest framework to learn, but it is the most capable and the most supported.
LangChain and LangGraph are both free and open source under the MIT licence. You can download and use them at no cost. LangSmith — the observability and debugging platform built by the same team — has a free tier (up to 5,000 traces per month) and paid plans starting at $39 per user per month for teams. LangSmith is optional but highly recommended for production use.
LangChain's core value is breadth. The framework provides standardised interfaces for the components every AI application needs: model connectors, tool integrations, document loaders, retrievers and vector stores, and output parsers.
LangGraph adds workflow architecture on top. A LangGraph application is defined as a set of nodes (steps) and edges (transitions) operating over a shared, typed state object.
LangGraph implements a stateful, cyclical computation graph where each node is a function and edges define control flow. This is architecturally similar to finite state machines but with typed state objects rather than discrete states, and conditional edges that compute the next state transition at runtime.
The central abstraction is the StateGraph — a graph that carries a typed state object (defined as a TypedDict or Pydantic model) through its nodes. Each node is a Python function with signature def node_fn(state: StateType) -> StateType — it receives the current state and returns an updated state. LangGraph merges the returned partial state into the full state using a configurable reducer (defaulting to last-write-wins).
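The merge step can be sketched without the framework. The sketch below models a node's partial return being folded into the full state key by key, with a per-key reducer; the messages channel and reducer names are illustrative assumptions, not LangGraph's internals.

```python
from typing import Callable

# Illustrative state: a dict of channel name -> value.
State = dict

def last_write_wins(old, new):
    """Default reducer: the node's returned value replaces the old one."""
    return new

def append_reducer(old, new):
    """An accumulating reducer, e.g. for a growing message list."""
    return (old or []) + new

def merge(state: State, update: State, reducers: dict[str, Callable]) -> State:
    """Fold a node's partial update into the full state, key by key."""
    merged = dict(state)
    for key, value in update.items():
        reducer = reducers.get(key, last_write_wins)
        merged[key] = reducer(state.get(key), value)
    return merged

reducers = {"messages": append_reducer}  # per-channel reducer configuration
state = {"messages": ["hi"], "step": 1}
state = merge(state, {"messages": ["hello"], "step": 2}, reducers)
# messages accumulates; step is overwritten (last-write-wins)
```

The point of the per-key reducer is that a node returning only the keys it changed cannot clobber channels it never touched.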
Edges are added with add_edge(source, target) for unconditional transitions and add_conditional_edges(source, condition_fn, mapping) for branches. The condition function receives the state and returns a string key that maps to a target node name. This enables routing logic: "if the agent decided to use a tool, go to the tool execution node; if it produced a final answer, go to the end node."
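The routing logic can be sketched framework-free. Here nodes are plain functions, conditional edges are a condition function plus a string-to-node mapping, and the loop keeps executing until it reaches the end marker. The node names, the next_action key, and the stubbed agent behaviour are hypothetical.

```python
# Framework-free sketch of LangGraph-style routing: a condition function
# reads the state and returns a string key, which is looked up in a
# mapping to find the next node. Names here are illustrative.
END = "__end__"

def agent(state):
    # Stub: ask for a tool on the first pass, then produce a final answer.
    if not state.get("tool_result"):
        return {"next_action": "use_tool"}
    return {"next_action": "final_answer", "answer": state["tool_result"]}

def tools(state):
    return {"tool_result": "42"}

def route_after_agent(state):
    return state["next_action"]  # string key, resolved via the mapping

nodes = {"agent": agent, "tools": tools}
conditional_edges = {"agent": (route_after_agent,
                               {"use_tool": "tools", "final_answer": END})}
edges = {"tools": "agent"}  # unconditional edge back to the agent

def run(state, entry="agent"):
    current = entry
    while current != END:
        state = {**state, **nodes[current](state)}  # last-write-wins merge
        if current in conditional_edges:
            condition, mapping = conditional_edges[current]
            current = mapping[condition(state)]
        else:
            current = edges[current]
    return state

final = run({})
```

The cycle agent → tools → agent is exactly what a linear chain cannot express: the graph revisits the agent node until the condition function routes to the end.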
LangGraph's checkpointing system saves the full state at every node execution. Checkpointers are pluggable backends: in-memory (for development), SQLite (for single-process persistence), and PostgreSQL (for production multi-process deployments). The checkpoint system enables:
- Persistence: conversation state survives across sessions and process restarts
- Fault recovery: after a crash, execution resumes from the last saved checkpoint rather than restarting from scratch
- Human-in-the-loop control: execution can pause at designated nodes (interrupt_before or interrupt_after on any node), the state can be inspected and modified externally, and execution resumes from the modified state

LangGraph supports two primary multi-agent patterns:
A supervisor node (backed by an LLM) receives the overall goal and routes to specialist sub-graphs. Each specialist is itself a compiled LangGraph graph, called as a tool. The supervisor observes the specialist's output, decides whether it is satisfactory, and either routes to the next specialist or asks the same specialist to revise. This is the most controllable multi-agent pattern.
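The control loop of the supervisor pattern can be sketched without the framework. The supervisor below is a stub standing in for an LLM-backed router, and the specialists stand in for compiled sub-graphs; all names and the fixed two-step routing are hypothetical.

```python
# Framework-free sketch of the supervisor pattern: a router chooses the
# next specialist (or finishes) based on the work done so far.
def researcher(task):
    return f"notes on {task}"

def writer(task):
    return f"draft based on {task}"

specialists = {"researcher": researcher, "writer": writer}

def supervisor(goal, history):
    """Decide the next specialist, or finish. A real supervisor would make
    this decision with an LLM call over the goal and the history."""
    if not history:
        return "researcher"
    if len(history) == 1:
        return "writer"
    return "FINISH"

def run(goal):
    history = []
    while True:
        choice = supervisor(goal, history)
        if choice == "FINISH":
            return history
        # Each specialist works on the goal or on the previous output.
        task = goal if not history else history[-1]
        history.append(specialists[choice](task))

result = run("quantum computing")
```

Because every transition passes back through the supervisor, the pattern stays controllable: the router can reject unsatisfactory output and re-route to the same specialist.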
LangGraph supports subgraphs — compiled graphs used as nodes within a parent graph. The parent graph manages the overall workflow; each subgraph manages a self-contained sub-workflow. State can be passed between parent and subgraph via an input/output mapping. This allows genuinely modular agent systems where each sub-workflow is independently testable.
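The input/output mapping between parent and subgraph state can be sketched as follows. The key names (document, text, summary, doc_summary) and the wrapper function are hypothetical, not LangGraph's API; the point is that the subgraph sees only its own schema.

```python
# Sketch of a compiled subgraph used as a node in a parent graph,
# with explicit mapping between the two state schemas.
def summarise_subgraph(sub_state):
    """A self-contained sub-workflow with its own state schema."""
    text = sub_state["text"]
    return {"summary": text[:20]}  # stub summariser: truncate

def summarise_node(parent_state):
    """Parent node wrapping the subgraph: map parent keys in, sub keys out."""
    sub_input = {"text": parent_state["document"]}    # input mapping
    sub_output = summarise_subgraph(sub_input)
    return {"doc_summary": sub_output["summary"]}     # output mapping

parent_state = {"document": "LangGraph supports subgraphs as nodes."}
parent_state.update(summarise_node(parent_state))
```

Because the subgraph only ever reads and writes its own keys, it can be unit-tested in isolation and reused under different parent graphs.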
LangGraph supports token-level streaming from model calls via graph.astream(). Both node outputs and intermediate model tokens can be streamed to the client, enabling responsive UIs that show progress as the agent works. This is significant for production applications where multi-step agent workflows may run for 30–120 seconds.
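The shape of this streaming interface can be sketched with async generators. The event types, the hard-coded token list, and the single-node pipeline below are illustrative assumptions, not LangGraph's actual astream() protocol.

```python
import asyncio

# Framework-free sketch of streamed execution: a node yields intermediate
# chunks (standing in for model tokens) and then its final output, so a
# client can render progress while the workflow runs.
async def model_node(state):
    for token in ["Thinking", " about", " it..."]:
        yield {"type": "token", "value": token}
    yield {"type": "node_output", "value": {"answer": "done"}}

async def astream(state):
    """Stream events from the node, merging final outputs into the state."""
    async for event in model_node(state):
        if event["type"] == "node_output":
            state = {**state, **event["value"]}
        yield event

async def main():
    events = []
    async for event in astream({}):
        events.append(event)  # a real client would render each event live
    return events

events = asyncio.run(main())
```

The client consumes one async stream and can distinguish token-level chunks from completed node outputs by event type — the difference between a spinner and a live, readable trace during a long-running workflow.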
LangGraph supports MCP (Model Context Protocol) tool servers via the langchain-mcp-adapters package. Any MCP-compliant tool server can be loaded as a LangChain tool and used inside a LangGraph node. This means the full ecosystem of MCP tool providers (Claude Desktop integrations, third-party MCP servers) is available to LangGraph agents. Official documentation at python.langchain.com.
LangSmith provides distributed tracing for LangChain and LangGraph applications. Each run generates a trace tree showing every node execution, model call, token count, latency, and cost. Traces are queryable and filterable. The evaluation framework allows automated testing against datasets with custom evaluators. LangSmith is the primary tool for identifying failure modes, measuring quality, and tracking costs in production LangGraph deployments. Pricing: free tier (5,000 traces/month), Developer ($39/user/month), Plus ($299/user/month). Official documentation at docs.smith.langchain.com.
Source note: All technical specifications in this guide are drawn from official LangChain and LangGraph documentation, LangSmith documentation, and the LangChain GitHub repository. Pricing figures are from the official LangSmith pricing page, verified April 2026.