LangChain shipped its first stable v1.0 release in late October 2025. After spending the past two months working with the new APIs, I genuinely feel this is the most coherent and thoughtfully designed version of LangChain to date.
I wasn’t always a LangChain fan. The early versions were fragile and poorly documented, the abstractions shifted often, and it felt premature to use in prod. But v1.0 felt more intentional, with a more consistent mental model for how data should flow through agents and tools.
This isn’t a sponsored post, by the way. I’d love to hear your thoughts, so feel free to DM me here!
This article isn’t here to regurgitate the docs. I’m assuming you’ve already dabbled with LangChain (or are a heavy user). Rather than dumping a laundry list of points, I’m going to cherry-pick just four key points.
A quick recap: LangChain, LangGraph & LangSmith
At a high level, LangChain is a framework for building LLM apps and agents, giving devs common abstractions to ship AI features fast.
LangGraph is the graph-based execution engine that runs durable, stateful agent workflows in a controllable way. Finally, LangSmith is an observability platform for tracing and monitoring.
Put simply: LangChain helps you build agents fast, LangGraph runs them reliably, and LangSmith lets you monitor and improve them in production.
My stack
For context, most of my recent work focuses on building multi-agent features for a customer-facing AI platform at work. My backend stack is FastAPI, with Pydantic powering schema validation and data contracts.
Lesson 1: Dropping support for Pydantic models
A significant shift in the migration to v1.0 was the introduction of the new create_agent method. It streamlines how agents are defined and invoked, but it also drops support for Pydantic models and dataclasses in agent state. Everything must now be expressed as a TypedDict extending AgentState.
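To make the constraint concrete, here’s a minimal sketch. I’m assuming AgentState and create_agent are importable from langchain.agents (as in the v1.0 docs) and that create_agent accepts a state_schema parameter; SupportState and its fields are hypothetical names for illustration:

```python
from langchain.agents import AgentState, create_agent


# In v1.0, agent state must be a TypedDict. AgentState already defines
# the base fields (messages, etc.); we extend it with our own keys.
class SupportState(AgentState):
    customer_id: str
    escalated: bool


agent = create_agent(
    model="openai:gpt-4o",      # any provider:model string your setup supports
    tools=[],
    state_schema=SupportState,  # a Pydantic model here is no longer accepted
)
```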
If you’re using FastAPI, Pydantic is often the recommended and default schema validator. I valued schema consistency across the codebase and felt that mixing TypedDicts and Pydantic models would inevitably create confusion, especially for new engineers who might not know which schema format to follow.
To solve this, I introduced a small helper function that converts a Pydantic model into a TypedDict extending AgentState, right before it’s passed to create_agent. It’s important to note that LangChain attaches custom metadata to type annotations, and that metadata must be preserved. Python utilities like get_type_hints() strip these annotations by default, so a naïve conversion won’t work.
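Here’s a minimal sketch of the idea. The AgentState import path and the OrderContext model are assumptions; the key detail is include_extras=True, which keeps the Annotated metadata that a plain get_type_hints() call would discard:

```python
from typing import get_type_hints

from pydantic import BaseModel
from typing_extensions import TypedDict

from langchain.agents import AgentState  # import path assumed; check your version


def pydantic_to_agent_state(model: type[BaseModel]) -> type:
    """Build a TypedDict merging AgentState's fields with the model's.

    include_extras=True preserves Annotated metadata (e.g. the reducer
    LangChain attaches to AgentState's messages field); without it,
    get_type_hints() strips the metadata and the schema silently breaks.
    """
    base = get_type_hints(AgentState, include_extras=True)
    hints = get_type_hints(model, include_extras=True)
    own = {name: hints[name] for name in model.model_fields}
    return TypedDict(f"{model.__name__}State", {**base, **own}, total=False)


class OrderContext(BaseModel):  # hypothetical app-level schema
    order_id: str
    retries: int = 0


OrderState = pydantic_to_agent_state(OrderContext)  # pass as state_schema
```

This keeps Pydantic as the single source of truth for schemas in the codebase, with the TypedDict generated only at the LangChain boundary.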
Lesson 2: Deep agents are opinionated by design
Alongside the new create_agent API in LangChain 1.0 came something that caught my attention: the deepagents library. Inspired by tools like Claude Code and Manus, deep agents can plan, break tasks into steps, and even spawn subagents.
When I first saw this, I wanted to use it. Why wouldn’t you want “smarter” agents, right? But after trying it across several workflows, I realised that the extra autonomy was sometimes unnecessary, and in certain cases counterproductive.
The deepagents library is fairly opinionated, and very much by design. Each deep agent comes with built-in middleware: things like ToDoListMiddleware, FilesystemMiddleware, SummarizationMiddleware, etc. These shape how the agent thinks, plans, and manages context. The catch is that you can’t control exactly when these default middleware run, nor can you disable the ones you don’t need.
After digging into the deepagents source code here, you can see that the middleware parameter is described as additional middleware applied after the standard middleware: anything passed in middleware=[...] gets appended after the defaults.
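For illustration, a hedged sketch of what that looks like in practice (parameter names vary across deepagents versions, so treat this as a rough shape rather than the exact API):

```python
from deepagents import create_deep_agent  # pip install deepagents

# The built-in middleware (todo list, filesystem, summarization, ...)
# always runs first; anything passed below is appended AFTER it, so you
# can add behaviour but not reorder or remove the defaults.
agent = create_deep_agent(
    tools=[],
    system_prompt="You are a research assistant.",
    middleware=[],  # your custom middleware lands here, after the defaults
)
```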
All this extra orchestration also introduced noticeable latency, and may not provide meaningful benefit. So if you want more granular control, stick with the simpler create_agent method.
I’m not saying deep agents are bad; they’re powerful in the right scenarios. But this is a good reminder of a classic engineering principle: don’t chase the “shiny” thing. Use the tech that solves your actual problem, even if it’s the “less glamorous” option.
My favourite feature: Structured output
Having deployed agents in production, especially ones that integrate with deterministic enterprise systems, I’ve found that getting agents to consistently produce output in a specific schema is crucial.
LangChain 1.0 makes this pretty easy. You can define a schema (e.g., a Pydantic model) and pass it to create_agent via the response_format parameter. The agent then produces output that conforms to that schema within a single agent loop, with no additional steps.
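As a minimal sketch (TicketTriage is a hypothetical schema, and I’m assuming the validated result lands under the structured_response key, as in the 1.x docs):

```python
from pydantic import BaseModel, Field

from langchain.agents import create_agent


class TicketTriage(BaseModel):  # hypothetical output schema
    category: str = Field(description="One of: billing, bug, feature_request")
    priority: int = Field(ge=1, le=5)
    summary: str


agent = create_agent(
    model="openai:gpt-4o",
    tools=[],
    response_format=TicketTriage,
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "App crashes on login every time"}]}
)
triage = result["structured_response"]  # a validated TicketTriage instance
```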
This has been incredibly useful whenever I need the agent to strictly adhere to a JSON structure with certain fields guaranteed. So far, structured output has been very reliable too.
What I want to explore more of: Middleware
One of the trickiest parts of building reliable agents is context engineering: making sure the agent always has the right information at the right time. Middleware was introduced to give developers precise control over each step of the agent loop, and I think it’s worth diving deeper into.
Middleware can mean different things depending on context (pun intended). In LangGraph, it can mean controlling the exact sequence of node execution. In long-running conversations, it might involve summarising collected context before the next LLM call. In human-in-the-loop scenarios, middleware can pause execution and wait for a user to approve or reject a tool call.
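Here’s a hedged sketch of wiring two of those scenarios into create_agent. I’m assuming the langchain.agents.middleware import path and these constructor parameters from the 1.x docs; refund_tool is a hypothetical demo tool:

```python
from langchain.agents import create_agent
from langchain.agents.middleware import (  # import path assumed; check your version
    HumanInTheLoopMiddleware,
    SummarizationMiddleware,
)
from langchain_core.tools import tool


@tool
def refund_tool(order_id: str) -> str:
    """Issue a refund for an order (hypothetical demo tool)."""
    return f"Refund issued for {order_id}"


agent = create_agent(
    model="openai:gpt-4o",
    tools=[refund_tool],
    middleware=[
        # Summarise older turns before the context window fills up.
        SummarizationMiddleware(
            model="openai:gpt-4o-mini",
            max_tokens_before_summary=4000,
        ),
        # Pause the loop and wait for human approval on risky tool calls.
        HumanInTheLoopMiddleware(interrupt_on={"refund_tool": True}),
    ],
)
```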
More recently, in the v1.1 minor release, LangChain also added a model retry middleware with configurable exponential backoff, allowing graceful recovery from transient endpoint errors.
I personally think middleware is a game changer as agentic workflows get more complex, long-running, and stateful, especially when you need fine-grained control or robust error handling.
The list of middleware is growing, and it really helps that it stays provider-agnostic. If you’ve experimented with middleware in your own work, I’d love to hear what you found most useful!
To finish off
That’s it for now: four key reflections on what I’ve learnt so far about LangChain. And if anyone from the LangChain team happens to be reading this, I’m always happy to share user feedback or just chat 🙂
Have fun building!
