OpenClaw agents can dream now
Claude Code just shipped a feature called auto-dream. The idea: between sessions, a sub-agent reviews your accumulated memory files, merges duplicates, resolves contradictions, and converts stale relative dates to absolute ones. Your agent's memory gets cleaned up while you sleep. The metaphor comes from REM sleep consolidating short-term memory into long-term storage.
It's a good idea. The execution has problems.
The /dream command returns "Unknown skill" (#38461, #39135). The background trigger requires both 24 hours and 5 sessions to pass before it fires, and even then it's gated behind a server-side flag (tengu_onyx_plover) that most users don't have enabled. When it does run, there's no log of what changed. Your MEMORY.md is different this morning than it was last night, and you can't see why.
db0 v0.3.0 ships the same capability today, for OpenClaw and every other integration. With an audit trail.
What memory consolidation fixes
Run any agent long enough and its memory becomes a junk drawer. After three weeks of active use, you'll find entries like:
- "User prefers TypeScript"
- "User always uses strict mode"
- "User likes functional style over classes"
- "Project uses TypeScript 5.3 with strict mode enabled"
Four memories that should be one. They compete for retrieval slots. They waste token budget in context().pack(). Search for "coding preferences" and you get four near-identical results instead of four distinct facts.
Claude Code's auto-dream runs a four-phase cycle to fix this: orient (read the memory directory), gather signal (scan logs and transcripts), consolidate (merge and rewrite), prune (update the index). It runs the full thing through an LLM sub-agent.
db0's approach is different. Two phases, not four. The first phase is deterministic: exact duplicates and near-duplicates get cleaned up algorithmically, no LLM. The second phase clusters remaining memories by embedding similarity and sends only those clusters to an LLM for rewriting. Most redundancy is caught in phase one. You pay for LLM calls only on the hard cases.
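To make the split concrete, here's a minimal sketch of that two-phase shape in TypeScript. It's illustrative only: the Memory type, the helper names, and the greedy clustering strategy are assumptions about the approach, not db0's actual internals.

// Illustrative two-phase consolidation sketch; the Memory type and helpers are
// placeholders, not db0's internals.
interface Memory { id: number; content: string; embedding: number[]; }

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na * nb) || 1);
}

function findClusters(memories: Memory[], clusterThreshold = 0.82): Memory[][] {
  // Phase 1: deterministic dedup, no LLM. Exact duplicates collapse by normalized text.
  const byText = new Map<string, Memory>();
  for (const m of memories) {
    const key = m.content.trim().toLowerCase();
    if (!byText.has(key)) byText.set(key, m);
  }

  // Phase 2: greedy clustering by embedding similarity. Only clusters of 2+
  // get sent to the LLM for rewriting; singletons are left untouched.
  const clusters: Memory[][] = [];
  for (const m of byText.values()) {
    const home = clusters.find(
      (c) => cosineSimilarity(c[0].embedding, m.embedding) >= clusterThreshold,
    );
    if (home) home.push(m);
    else clusters.push([m]);
  }
  return clusters.filter((c) => c.length >= 2);
}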
Before and after
Before:
Memory #412: "User prefers TypeScript" (scope: user)
Memory #523: "User always uses strict mode" (scope: user)
Memory #601: "User likes functional style" (scope: user)
After:
Memory #847: "User prefers TypeScript with strict mode and functional style" (scope: user)
  mergedFrom: [#412, #523, #601]
  consolidatedAt: 2026-03-30T14:22:00Z
Memory #412: superseded by #847
Memory #523: superseded by #847
Memory #601: superseded by #847
The originals don't disappear. They're marked as superseded and excluded from search, but they're still in the database. You can trace exactly which facts were merged into what, and when.
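In TypeScript terms, the record shape implied by that listing looks roughly like this. The field names mirror the example above; the exact schema is an assumption, not db0's published types.

// Record shape implied by the example above; an assumed sketch, not db0's published schema.
interface MemoryRecord {
  id: number;                       // e.g. 847
  content: string;                  // the merged statement
  scope: string;                    // e.g. "user"
  status: "active" | "superseded";  // superseded records stay in the database, excluded from search
  supersededBy?: number;            // e.g. #412's record points at #847
  mergedFrom?: number[];            // e.g. [412, 523, 601] on the merged record
  consolidatedAt?: string;          // ISO timestamp, e.g. "2026-03-30T14:22:00Z"
}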
The transparency problem with auto-dream
GitHub issue #38493 flags three gaps in auto-dream: identity (it sometimes writes wrong project names into memory), accuracy (it makes unverified claims), and transparency (no logs of what it changed, created, or removed).
That third one is the one I care about most.
When db0 consolidates, every merged memory records its mergedFrom IDs and a consolidatedAt timestamp. The originals sit in the database with superseded status. If a merge was wrong, you find the originals, restore them, adjust the clustering threshold. The mistake is recoverable.
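A recovery pass might look something like this, sketched against the MemoryRecord shape above. The store methods are hypothetical; db0's real query API will differ.

// Hypothetical recovery sketch; `store` and its methods are illustrative, not a documented db0 API.
async function undoMerge(
  store: {
    get(id: number): Promise<MemoryRecord>;
    setStatus(id: number, status: "active" | "superseded"): Promise<void>;
  },
  mergedId: number,
): Promise<void> {
  const merged = await store.get(mergedId);
  for (const id of merged.mergedFrom ?? []) {
    await store.setStatus(id, "active");         // restore the originals
  }
  await store.setStatus(mergedId, "superseded"); // retire the bad merge
}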
When auto-dream consolidates, the originals are gone. Your MEMORY.md changed overnight, and there's no diff, no log, no provenance chain. For a coding agent that's been running for months, this is the difference between infrastructure you can debug and a black box you have to trust.
Setup in OpenClaw
Add a consolidateFn to your db0 config. OpenClaw users usually have a Gemini key already:
// db0's export name is assumed from the package shown in the install step below.
import { db0 } from "@db0-ai/openclaw";
import { GoogleGenerativeAI } from "@google/generative-ai";

const genai = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genai.getGenerativeModel({ model: "gemini-2.0-flash" });

db0({
  consolidateFn: async (memories: string[]) => {
    const prompt = `Merge these related facts into one concise statement
that preserves all information:\n\n${memories.map((m, i) => `${i + 1}. ${m}`).join("\n")}`;
    const result = await model.generateContent(prompt);
    return { content: result.response.text() };
  },
  consolidation: {
    clusterThreshold: 0.82,  // minimum embedding similarity for facts to cluster
    minClusterSize: 2,       // never consolidate a lone fact
    maxClustersPerRun: 10,   // caps LLM calls per reconcile cycle
  },
});
Without consolidateFn, nothing changes. Zero breaking changes.
Consolidation runs inside reconcile(), which the db0 OpenClaw plugin already calls in afterTurn. No new lifecycle hooks. No separate background process. No waiting for 24 hours and 5 sessions to pass.
The consolidation settings let you tune how aggressive merging is. A lower clusterThreshold (0.75) merges more liberally; a higher one (0.90) touches only near-identical facts. maxClustersPerRun caps your LLM spend per cycle.
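As a concrete example, a conservative profile, the kind you might start with on a long-lived memory store, could look like this. The threshold values come from the ranges above; the per-run cap of 5 is an arbitrary illustration.

// A conservative profile: merge only near-identical facts, keep LLM spend low.
db0({
  consolidateFn, // same Gemini-backed function as above
  consolidation: {
    clusterThreshold: 0.90, // only near-identical facts cluster
    minClusterSize: 2,
    maxClustersPerRun: 5,   // arbitrary illustrative cap; tune to your budget
  },
});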
How it compares
| | db0 consolidation | Claude Code auto-dream |
|---|---|---|
| Status | Shipping, v0.3.0 | /dream returns "Unknown skill" |
| Audit trail | mergedFrom IDs, originals preserved | No change log |
| Trigger | Every reconcile cycle, configurable | 24h + 5 sessions, server-gated |
| Cost model | Deterministic dedup first, LLM for clusters only | Full LLM sub-agent every run |
| Frameworks | OpenClaw, Claude Code, AI SDK, LangChain, Pi | Claude Code only |
| Configurable | Threshold, cluster size, max per run | Not user-configurable |
One thing auto-dream does that db0 doesn't: it scans past session transcripts for facts that were never extracted into memory. db0 consolidation only operates on facts already stored. If something was said in conversation but never written down, consolidation won't find it. That's a real advantage for auto-dream, and it's on our roadmap.
The architecture decision
reconcile() already ran three steps in the db0 lifecycle: promote, merge, clean. Consolidation slots in as step 2b, between merge and clean. Because it lives inside the existing lifecycle, every integration gets it automatically: OpenClaw's afterTurn, the AI SDK middleware, and LangChain's chat history adapter all call reconcile(). No integration code changed to ship this.
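As a sketch of that ordering (the step functions here are placeholders for db0's internal phases, not a documented API):

// Rough sketch of the reconcile() ordering; step functions are placeholders, not db0's API.
type MemoryStore = unknown;
declare function promote(db: MemoryStore): Promise<void>;     // step 1: promote new facts
declare function merge(db: MemoryStore): Promise<void>;       // step 2: merge with existing facts
declare function consolidate(db: MemoryStore): Promise<void>; // step 2b: the new pass (dedup, then clustered LLM rewrite)
declare function clean(db: MemoryStore): Promise<void>;       // step 3: clean

async function reconcile(db: MemoryStore): Promise<void> {
  await promote(db);
  await merge(db);
  await consolidate(db); // runs only when a consolidateFn is configured
  await clean(db);
}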
The design doc has the full rationale.
What's next
Consolidation is one part of v0.3.0. Coming up:
- BM25 keyword search to complete multi-strategy retrieval
- Bi-temporal model for temporal reasoning about facts
- Entity graph extraction for relationship-based queries
- Full LongMemEval benchmark validation at scale
To try it:
npm install @db0-ai/openclaw@0.3.0
Add a consolidateFn. Run your agent for a few sessions. The memory count goes down. The information density goes up.
db0 is open source (MIT). Issue #15 has the design discussion. PR #17 has the implementation.