LangChain memory is deprecated — what to use in 2026 (JavaScript)
This article covers LangChain.js (JavaScript/TypeScript). For Python LangChain, see the note at the end.
Three memory APIs in 18 months. Each one deprecated the last, each migration required a near-complete rewrite. If you've built on LangChain.js memory, you've lived this:
- ConversationBufferMemory — deprecated when LCEL arrived
- RunnableWithMessageHistory — deprecated in v0.3, described as unintuitive in the official migration docs
- LangGraph Checkpointer — the current answer, with its own issues
The pattern: memory was coupled to the orchestration layer. When the orchestration was redesigned, memory broke with it.
This isn't a complaint about LangChain. The team is iterating fast and the framework is improving. But if your memory layer is tightly coupled to the framework, every framework improvement becomes a memory migration. That's the problem worth solving.
What went wrong with each API
ConversationBufferMemory (v0.1)
The original. Stored everything in the chain object:
import { BufferMemory } from "langchain/memory"; // deprecated
import { ConversationChain } from "langchain/chains"; // deprecated
const memory = new BufferMemory();
const chain = new ConversationChain({ llm, memory });
No persistence (died with the process), no scoping, no deduplication. User preferences, scratch notes, and stale facts accumulated in one undifferentiated list. When LCEL replaced chains, BufferMemory had no migration path. Issue #5235 captures the experience: developers using BufferMemory with Next.js 14.2+ had builds fail entirely, and the community workaround was a hacks.ts file with dummy imports to keep the bundler happy. One commenter's advice: "You just have to hurt yourself a bit more by figuring out where from you could import those two."
RunnableWithMessageHistory (LCEL era)
Introduced persistence, but the API was genuinely confusing. You needed a getMessageHistory factory function that returned a BaseChatMessageHistory for each session ID. The types were complex, the wiring was fragile, and debugging was painful when things went wrong.
The official migration docs acknowledged the API was hard to use. Deprecated in v0.3.
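To make the "fragile wiring" concrete, here is a stripped-down mock of the pattern RunnableWithMessageHistory imposed: a factory keyed by session ID, a session map, and a wrapper that threads per-call config into the factory. The types below are stand-ins, not the actual @langchain/core signatures.

```typescript
// Simplified mock of the RunnableWithMessageHistory wiring pattern.
// Illustrative stand-in types — not the real @langchain/core API.

type Message = { role: "user" | "ai"; content: string };

interface ChatMessageHistory {
  getMessages(): Message[];
  addMessage(m: Message): void;
}

// Each session needs its own history object, created by a factory.
const sessions = new Map<string, ChatMessageHistory>();

function getMessageHistory(sessionId: string): ChatMessageHistory {
  if (!sessions.has(sessionId)) {
    const messages: Message[] = [];
    sessions.set(sessionId, {
      getMessages: () => messages,
      addMessage: (m) => messages.push(m),
    });
  }
  return sessions.get(sessionId)!;
}

// The wrapper threads sessionId from per-call config into the factory,
// exposes history to the inner chain, then writes both turns back.
function withMessageHistory(
  runnable: (input: string, history: Message[]) => string
) {
  return (input: string, config: { sessionId: string }) => {
    const history = getMessageHistory(config.sessionId);
    const output = runnable(input, history.getMessages());
    history.addMessage({ role: "user", content: input });
    history.addMessage({ role: "ai", content: output });
    return output;
  };
}
```

Three interacting pieces — factory, session map, per-call config — for what is conceptually "remember the conversation." That's the surface area developers had to debug when the wiring broke.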
LangGraph Checkpointer (current)
The current official answer. Works well within LangGraph: thread-internal state is automatically checkpointed at each graph step. But it has real limitations for general memory:
- langgraph dev can silently ignore custom Checkpointer configuration depending on how you create the agent. State ends up in-memory during development, destroyed on every hot reload. You think you have persistence, but you don't.
- Only covers thread-internal state. Cross-session memory (user preferences that persist across all conversations) requires a separate Store interface.
- Requires adopting LangGraph's execution model. If you're using plain LCEL or a custom agent loop, checkpointers don't help.
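The thread/store split is easier to see in miniature. The sketch below models the two scopes LangGraph separates: per-thread checkpoints for conversation state, and a cross-session store for durable user facts. It's an illustration of the concept, not the actual Checkpointer or Store interfaces.

```typescript
// Minimal model of the two scopes: per-thread checkpoints vs a
// cross-session store. Illustrative only — not LangGraph's interfaces.

type ThreadState = { messages: string[] };

class Checkpointer {
  private threads = new Map<string, ThreadState>();
  save(threadId: string, state: ThreadState): void {
    this.threads.set(threadId, state);
  }
  load(threadId: string): ThreadState {
    return this.threads.get(threadId) ?? { messages: [] };
  }
}

class CrossSessionStore {
  private facts = new Map<string, string>();
  put(userId: string, key: string, value: string): void {
    this.facts.set(`${userId}:${key}`, value);
  }
  get(userId: string, key: string): string | undefined {
    return this.facts.get(`${userId}:${key}`);
  }
}
```

A preference saved in thread A is invisible to thread B unless it also goes through the store — which is exactly the separate interface the second bullet refers to.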
Here's the part that concerns me most: @langchain/langgraph has about 1.8M weekly npm downloads, compared to 3.5M for @langchain/core. That means roughly half of LangChain.js users haven't adopted LangGraph. Yet LangGraph is now the only officially supported way to do memory. The ~1.7M weekly downloads of @langchain/core that don't come with @langchain/langgraph represent users who are either stuck on deprecated memory APIs or rolling their own.
The root cause: coupling
Every LangChain.js memory solution was tightly coupled to a specific abstraction layer:
- BufferMemory → coupled to ConversationChain
- RunnableWithMessageHistory → coupled to LCEL Runnables
- Checkpointer → coupled to LangGraph
When the layer was redesigned, the memory solution broke. This isn't a bug. It's structural.
Think of it like coupling your database schema to your web framework. Nobody rebuilds their Postgres schema when they upgrade Express to Fastify. The storage layer and the application layer are independent concerns. But LangChain's memory has always been part of the application layer, and every time that layer gets redesigned (which is often, and should be), the memory goes with it.
The LangChain team will keep evolving their orchestration. They should. The Hacker News thread "Why we no longer use LangChain" had developers calling the abstractions "5 layers deep." The team is simplifying. That's good. But if your memory lives inside the orchestration, it will keep breaking.
The fix is architectural: decouple the memory layer from the framework.
Coupled: LangChain version → memory API version (breaks on upgrade)
Decoupled: LangChain version → (no effect) → memory version
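The decoupled shape can be sketched as a storage interface that imports nothing from any agent framework. The names below are illustrative, not db0's actual API; the point is the boundary, not the implementation.

```typescript
// Sketch of the decoupled shape: memory behind its own interface,
// with zero imports from any agent framework. Names are illustrative.

interface MemoryRecord {
  id: string;
  content: string;
  scope: "user" | "session";
}

interface MemoryLayer {
  write(record: MemoryRecord): void;
  search(query: string): MemoryRecord[];
}

// A trivial in-memory implementation; a real one would back onto
// SQLite or Postgres. Word-overlap stands in for semantic search.
class InMemoryLayer implements MemoryLayer {
  private records: MemoryRecord[] = [];
  write(record: MemoryRecord): void {
    this.records.push(record);
  }
  search(query: string): MemoryRecord[] {
    const terms = query.toLowerCase().split(/\s+/);
    return this.records.filter((r) =>
      terms.some((t) => r.content.toLowerCase().includes(t))
    );
  }
}
```

Only a thin adapter translates framework calls into MemoryLayer calls. When the framework's orchestration changes, the adapter is the only code you touch; the records survive untouched.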
Decoupled memory with db0
db0 is an open-source memory system that stores agent knowledge in SQLite or PostgreSQL, independent of which framework version you're running.
npm install @db0-ai/langchain @langchain/core
import { createDb0 } from "@db0-ai/langchain";
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
const memory = await createDb0();
const agent = createReactAgent({
llm: new ChatOpenAI({ model: "gpt-5.4-mini" }),
tools: [...memory.tools], // db0_memory_write, db0_memory_search, db0_memory_list
});
// Run 1 — user states a preference
await agent.invoke({
messages: [{ role: "user", content: "I prefer TypeScript over Python" }],
});
// Run 2 — agent can recall the preference
const result = await agent.invoke({
messages: [{ role: "user", content: "What language should I use for this project?" }],
});
// Agent has access to the TypeScript preference from Run 1
The agent gets three tools: db0_memory_write (store a fact with scope and tags), db0_memory_search (semantic search across stored facts), db0_memory_list (list memories by scope).
Memory is stored in SQLite locally. When LangChain.js releases v2 or deprecates createReactAgent (which is already marked deprecated in favor of createAgent), the SQLite file doesn't care. Your memory survives the next migration.
What about contradictions?
The most common memory bug in LangChain apps: the agent accumulates conflicting facts.
Turn 3: "User prefers Python"
Turn 47: "Actually, I switched to TypeScript"
Both facts are in memory. A similarity search for "preferred language" might return either one. The agent gives inconsistent answers depending on which embedding scores higher.
This isn't hypothetical. The LOCOMO benchmark (Snap Research) tested exactly this scenario across multiple memory systems. All of them showed significant degradation on questions about facts that changed over time. The reason: vector similarity search has no concept of time. "Prefers Python" and "switched to TypeScript" are both semantically close to "preferred language," so the retriever returns both and the model flips a coin.
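The coin flip is easy to reproduce with any similarity scorer. In the sketch below, word overlap stands in for embedding similarity; the point carries over because both techniques score relevance, not recency.

```typescript
// Demonstrates the failure mode: both facts score as relevant to
// "preferred language" because neither carries temporal ordering.
// Word overlap stands in for embedding similarity.

const facts = ["User prefers Python", "User switched to TypeScript"];

function score(query: string, fact: string): number {
  const q = new Set(query.toLowerCase().split(/\s+/));
  const words = fact.toLowerCase().split(/\s+/);
  return words.filter((w) => q.has(w)).length / words.length;
}

const query = "user preferred language";
const ranked = facts
  .map((fact) => ({ fact, score: score(query, fact) }))
  .filter((r) => r.score > 0);
// Both facts rank as relevant; nothing in the scores says which is current.
```

Whichever fact edges ahead depends on incidental wording, so the retrieved "preference" flips between runs and queries.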
db0 handles this with superseding:
await harness.memory().write({
content: "User prefers TypeScript",
scope: "user",
supersedes: previousPreferenceId,
});
// "User prefers Python" is preserved for audit but excluded from search
The old fact doesn't disappear. It's kept for history. But memory_search only returns the current fact. No contradictions, deterministic behavior.
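The semantics are simple enough to model in a few lines. The sketch below is not db0's internals — just an illustration of the described behavior: superseded facts stay in the table for audit but are filtered out of search.

```typescript
// Minimal model of superseding semantics. Not db0's internals — an
// illustration: superseded facts are kept for audit, excluded from search.

type Fact = { id: string; content: string; supersededBy?: string };

class FactStore {
  private facts = new Map<string, Fact>();

  write(id: string, content: string, supersedes?: string): void {
    this.facts.set(id, { id, content });
    if (supersedes) {
      const old = this.facts.get(supersedes);
      if (old) old.supersededBy = id; // kept, but marked superseded
    }
  }

  // Search only over current (non-superseded) facts.
  search(term: string): Fact[] {
    return [...this.facts.values()].filter(
      (f) =>
        !f.supersededBy &&
        f.content.toLowerCase().includes(term.toLowerCase())
    );
  }

  // Full history, including superseded facts, for audit.
  history(): Fact[] {
    return [...this.facts.values()];
  }
}
```

The contradiction is resolved at write time, once, by an explicit pointer — not at read time by a similarity score, which is why retrieval stays deterministic.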
LangGraph Store doesn't have this primitive. Mem0 handles it via LLM decision on every write (adds latency and cost, and the decision itself is non-deterministic). db0 does it explicitly.
Chat message history (BufferMemory replacement)
If you want a drop-in for BufferMemory that also extracts facts, db0 provides a BaseChatMessageHistory implementation:
import { Db0ChatMessageHistory } from "@db0-ai/langchain";
const history = new Db0ChatMessageHistory({ harness: memory.harness });
await history.addUserMessage("I always use TypeScript with strict mode");
await history.addAIMessage("Got it! I'll remember that.");
// Facts are extracted automatically — "always use" is a signal word
// Stored as a user-scoped fact, searchable across all future sessions
Unlike BufferMemory, this persists to disk. Unlike RunnableWithMessageHistory, the API is three methods: addUserMessage, addAIMessage, getMessages.
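The "signal word" extraction mentioned in the comments above can be sketched as a heuristic: phrases like "always use" or "I prefer" suggest a durable preference worth persisting beyond the session. This is an illustration of the idea, not db0's actual extractor.

```typescript
// Sketch of signal-word fact extraction: certain phrases mark a
// message as a durable preference worth persisting across sessions.
// Illustrative heuristic — not db0's actual extraction logic.

const SIGNAL_PATTERNS = [/\balways\b/i, /\bi prefer\b/i, /\bnever\b/i];

function extractFact(message: string): string | null {
  return SIGNAL_PATTERNS.some((p) => p.test(message)) ? message : null;
}
```

A message like "I always use TypeScript with strict mode" trips the heuristic and becomes a user-scoped fact; small talk passes through without polluting memory.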
For Python developers
@db0-ai/langchain targets LangChain.js. If you're using Python LangChain/LangGraph, the current production-stable options are:
- LangGraph PostgresSaver, if you're already in the LangGraph ecosystem
- Zep, for temporal knowledge graphs with automatic summarization
- LangMem, LangGraph-native with LLM-driven memory extraction (newer, less battle-tested than the other two)
A Python db0 SDK is on the roadmap but not yet available. I'd rather be honest about this than have you install the package and find no Python support.
The bottom line
The deprecation cycle will continue. LangChain's orchestration will keep evolving, and that's healthy. What's unhealthy is rebuilding your memory layer every time it does.
Decouple them. Store your agent's knowledge in a layer that doesn't know or care what version of LangChain you're running. When v2 ships, your upgrade is npm update @langchain/*, not "rewrite all memory code."
db0 is one way to do this. The architecture matters more than the specific tool.
db0 is open source (MIT). @db0-ai/langchain works with @langchain/core v1+ and @langchain/langgraph v1+.