Memory scopes for AI agents
Why flat memory breaks down, how scope hierarchies prevent context pollution, and how to design a scoping model for AI agent memory systems.
Every AI agent memory system starts simple: store facts, retrieve facts. A flat list. It works fine with 10 facts. It falls apart at 1,000.
The problem isn't storage — it's relevance. When every fact competes equally for attention, the agent wastes tokens on information that doesn't matter for the current task. Worse, it can leak context between users, between sessions, or between unrelated tasks.
Memory scopes solve this by organizing facts into hierarchical layers with defined lifetimes and visibility rules. The right fact surfaces at the right time, and irrelevant facts stay out of the way.
Why flat memory breaks down
Consider an AI coding agent with flat memory — no scopes, just a bag of facts:
- User prefers TypeScript over JavaScript
- Current task: refactor the auth middleware
- The team uses ESLint with the Airbnb config
- Yesterday's debugging session found a race condition in the queue
- User's timezone is PST
- Scratch calculation: 3 retries × 500ms = 1.5s max backoff
When the agent starts a new task, it retrieves the top-N most relevant facts. But "relevant" is ambiguous without scopes:
- The scratch calculation from yesterday? Irrelevant to today's task, but it's still in the pool.
- The race condition finding? Maybe relevant, maybe not — depends on whether the new task touches the queue.
- The user's timezone? Rarely relevant for coding, but it's competing for retrieval slots.
At scale, this creates three problems:
1. Context pollution
Every retrieval includes noise — facts from old sessions, other tasks, and unrelated contexts. The agent spends tokens processing information it doesn't need, reducing the budget available for actually relevant facts.
2. Information leakage
In multi-user systems, flat memory means User A's preferences might surface when User B asks a question. Even in single-user systems, leaking context between unrelated tasks can cause the agent to make incorrect assumptions.
3. Stale accumulation
Without defined lifetimes, facts accumulate forever. Scratch notes from months-old debugging sessions sit alongside current preferences. The agent can't distinguish between "this fact is permanently important" and "this fact was useful for 5 minutes."
What scopes are
A memory scope defines two things about a fact:
- Lifetime: When should this fact be created and when should it expire?
- Visibility: Which contexts should be able to see this fact?
Together, these properties create a hierarchy where facts are automatically available in the right contexts and invisible in the wrong ones.
Common scope models
Two-scope: session + persistent
The simplest useful model. Facts are either:
- Session: Available only during the current conversation. Discarded when the session ends.
- Persistent: Available across all sessions. Stored permanently.
This is better than no scopes, but it's too coarse. Everything persistent competes equally — user preferences, agent-wide patterns, and project-specific knowledge are all in one bucket.
Three-scope: session + user + global
Adds a distinction between user-specific and shared knowledge:
- Session: Current conversation only.
- User: This user across all sessions.
- Global: All users, all sessions.
Better, but still missing task-level isolation. If an agent is working on two tasks in one session, their scratch notes bleed together.
Four-scope: task + session + user + agent
This is the model db0 uses. It covers the full range of fact lifetimes:
| Scope | Lifetime | Visibility | Example |
|---|---|---|---|
| Task | Current task | Only the active task | Intermediate calculations, scratch work, partial results |
| Session | Current session | All tasks in this session | Conversation context, in-progress decisions, temporary preferences |
| User | Permanent | All sessions for this user | Preferences, decisions, personal context, learned patterns |
| Agent | Permanent | All users, all sessions | Agent-wide knowledge, capability documentation, global patterns |
The visibility hierarchy
Scopes form a hierarchy. When the agent retrieves facts, it sees everything at its current scope level and above:
Agent (global — always visible)
└─ User (visible in all sessions for this user)
└─ Session (visible in current session)
└─ Task (visible in current task only)
A task-scoped retrieval sees: task facts + session facts + user facts + agent facts. A user-scoped retrieval sees: user facts + agent facts. An agent-scoped retrieval sees: only agent facts.
This means:
- Task scratch notes never leak into other tasks
- User preferences are always available, regardless of the current task
- Agent-wide knowledge is universally accessible
- No explicit filtering needed — the scope hierarchy handles it
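The cumulative visibility rule above can be expressed as a small lookup. This is a minimal sketch, not db0's actual API; the `visibleScopes` helper and the ordering constant are illustrative:

```typescript
type Scope = "task" | "session" | "user" | "agent";

// Ordered from narrowest to broadest. A retrieval at a given scope
// sees that scope plus every broader one above it in the hierarchy.
const HIERARCHY: Scope[] = ["task", "session", "user", "agent"];

function visibleScopes(current: Scope): Scope[] {
  return HIERARCHY.slice(HIERARCHY.indexOf(current));
}

visibleScopes("task");  // ["task", "session", "user", "agent"]
visibleScopes("user");  // ["user", "agent"]
visibleScopes("agent"); // ["agent"]
```

Because visibility is derived from position in the hierarchy, adding a new scope level later means extending one array, not auditing every retrieval call site.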
Designing scope boundaries
The hard part isn't implementing scopes — it's deciding which scope each fact belongs to. Here are the heuristics:
Task scope: "Will this matter after the task is done?"
If no, it's task-scoped. Intermediate calculations, partial results, step-by-step progress tracking, and scratch notes all belong here. When the task completes, these facts are discarded.
The test: would this fact be noise in a different task? If yes, scope it to the task.
Session scope: "Will this matter tomorrow?"
If no, it's session-scoped. Conversation context ("we were just talking about the auth module"), temporary preferences ("let's use verbose logging for now"), and in-progress decisions that haven't been finalized all belong here.
The test: if the user starts a new session tomorrow, should this fact be in their context? If not, it's session-scoped.
User scope: "Is this specific to one person?"
If yes, it's user-scoped. Preferences, personal context, past decisions, and relationship history. These facts persist indefinitely and are always available when that user interacts with the agent.
The test: would this fact be wrong or irrelevant for a different user? If yes, it's user-scoped.
Agent scope: "Is this universally true?"
Agent-scoped facts are rare. They represent knowledge that's true regardless of who's interacting with the agent — documentation about the agent's own capabilities, global configuration, or patterns learned from many users.
The test: would this fact be useful for any user in any session? If yes, it's agent-scoped.
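The four heuristics chain into a decision procedure: ask the lifetime questions from narrowest to broadest and stop at the first "no." A sketch, with the boolean inputs mapping directly to the tests above:

```typescript
type Scope = "task" | "session" | "user" | "agent";

// Each question corresponds to one heuristic from the section above.
function assignScope(answers: {
  mattersAfterTask: boolean;  // "Will this matter after the task is done?"
  mattersTomorrow: boolean;   // "Will this matter tomorrow?"
  specificToOneUser: boolean; // "Is this specific to one person?"
}): Scope {
  if (!answers.mattersAfterTask) return "task";
  if (!answers.mattersTomorrow) return "session";
  if (answers.specificToOneUser) return "user";
  return "agent";
}
```

In practice these answers come from a classifier or from the agent itself at write time, but the ordering matters: default to the narrowest scope that fits, since over-scoping is what causes pollution.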
Practical examples
Coding agent
Agent: "This project uses TypeScript 5.3 with strict mode"
User: "User prefers functional style over classes"
Session: "We're refactoring the payment module today"
Task: "Current file: src/payments/stripe.ts, line 142"
When the agent generates code, it sees all four levels. It knows the project config (agent), the user's style preferences (user), what they're working on (session), and exactly where they are in the code (task).
Customer support agent
Agent: "Return policy: 30 days, receipt required"
User: "Customer has been a member since 2019, gold tier"
Session: "Customer is calling about order #8847"
Task: "Looking up shipping status for item 3 of 5"
The agent knows company policies (agent), the customer's history (user), the current issue (session), and the specific sub-task (task). When the task completes (shipping status found), the task facts are cleaned up. The session facts persist until the call ends. The user facts persist forever.
Multi-agent system
Scopes become even more important when sub-agents are involved:
Parent agent creates a task → spawns child agent
Child agent gets:
✓ Its own task scope (isolated scratch space)
✓ Parent's session scope (shared conversation context)
✓ User scope (shared user preferences)
✓ Agent scope (shared global knowledge)
✗ Parent's task scope (isolated — parent's scratch notes aren't shared)
This means child agents can share context without inheriting irrelevant scratch work from the parent. The scope hierarchy handles isolation automatically.
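Under this model, spawning a child agent amounts to copying every scope reference except the task ID. A hypothetical sketch; `AgentContext` and `spawnChild` are illustrative names, not a real API:

```typescript
interface AgentContext {
  agentId: string;
  userId: string;
  sessionId: string;
  taskId: string; // scratch space unique to this agent's current task
}

// Child inherits the agent, user, and session scopes, but gets a
// fresh task scope so the parent's scratch notes stay isolated.
function spawnChild(parent: AgentContext, childTaskId: string): AgentContext {
  return { ...parent, taskId: childTaskId };
}
```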
Fact superseding within scopes
Scopes interact with fact superseding — the mechanism that handles changing knowledge. When a new fact supersedes an old one, both facts must be in the same scope.
If a user changes their preference from "dark mode" to "light mode," the new user-scoped fact supersedes the old user-scoped fact. The old fact is marked as outdated and excluded from retrieval.
This prevents a common bug: a session-scoped temporary preference accidentally superseding a permanent user preference. Because superseding is scope-aware, temporary overrides stay temporary.
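A scope-aware supersede check can be as simple as refusing cross-scope replacement. A sketch, with an assumed `Fact` shape based on the metadata fields described later in this article:

```typescript
interface Fact {
  id: string;
  content: string;
  scope: "task" | "session" | "user" | "agent";
  status: "active" | "superseded";
  supersededBy: string | null;
}

// Only allow superseding within the same scope, so a session-level
// override can never retire a permanent user preference.
function supersede(oldFact: Fact, newFact: Fact): void {
  if (oldFact.scope !== newFact.scope) {
    throw new Error(
      `scope mismatch: ${newFact.scope} fact cannot supersede ${oldFact.scope} fact`
    );
  }
  oldFact.status = "superseded";
  oldFact.supersededBy = newFact.id;
}
```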
Implementation considerations
Scope metadata
Each fact needs scope metadata stored alongside its content:
```
{
  content: "User prefers dark mode",
  scope: "user",
  userId: "user_123",
  sessionId: null,   // not session-scoped
  taskId: null,      // not task-scoped
  status: "active",
  supersededBy: null
}
```
Retrieval filtering
When retrieving facts, filter by scope visibility before running semantic search. This is cheaper than retrieving all facts and filtering afterward — especially at scale.
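The pre-filter might look like the following sketch, assuming an in-memory array of facts with the metadata shape shown above (in a real store this would be a database query, with semantic ranking running afterward on the survivors):

```typescript
interface StoredFact {
  content: string;
  scope: "task" | "session" | "user" | "agent";
  userId: string | null;
  sessionId: string | null;
  taskId: string | null;
  status: "active" | "superseded";
}

interface RetrievalContext {
  userId: string;
  sessionId: string;
  taskId: string;
}

// Narrow the candidate set by scope visibility and ownership first;
// semantic search then runs over far fewer facts.
function visibleFacts(facts: StoredFact[], ctx: RetrievalContext): StoredFact[] {
  return facts.filter((f) => {
    if (f.status !== "active") return false;
    if (f.scope === "agent") return true;
    if (f.scope === "user") return f.userId === ctx.userId;
    if (f.scope === "session") return f.sessionId === ctx.sessionId;
    return f.taskId === ctx.taskId; // task scope
  });
}
```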
Cleanup
Task and session scopes need cleanup mechanisms:
- Task cleanup: When a task completes, mark all task-scoped facts as expired. Optionally, promote important task facts to session or user scope.
- Session cleanup: When a session ends, discard session-scoped facts. Optionally, extract and promote key decisions to user scope.
The "optionally promote" step is important. Sometimes a task produces knowledge that should outlive the task — "we decided to use the repository pattern for data access." That's a user-scoped decision, even though it was discovered during a task.
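A task-completion hook covering both paths might look like this sketch, where `TaskFact` and the `promote` callback are illustrative (a real system might use an LLM call or heuristics to decide what survives):

```typescript
interface TaskFact {
  content: string;
  scope: "task" | "session" | "user";
  taskId: string | null;
  status: "active" | "expired";
}

// On task completion: expire task-scoped facts, optionally promoting
// selected ones to a longer-lived scope first.
function completeTask(
  facts: TaskFact[],
  taskId: string,
  promote: (f: TaskFact) => "session" | "user" | null
): void {
  for (const f of facts) {
    if (f.scope !== "task" || f.taskId !== taskId) continue;
    const target = promote(f);
    if (target !== null) {
      f.scope = target; // the fact outlives its task
      f.taskId = null;
    } else {
      f.status = "expired";
    }
  }
}
```

A session-end hook follows the same shape: discard session-scoped facts, promoting key decisions to user scope.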
Key takeaways
- Flat memory doesn't scale. Without scopes, every fact competes equally, leading to context pollution, information leakage, and stale accumulation.
- Four scopes cover the full range: task (ephemeral), session (conversation), user (personal), agent (universal).
- Scopes form a visibility hierarchy. Tasks see everything. Agents see only global knowledge. No explicit filtering needed.
- Scope assignment is a design decision. Use the lifetime heuristic: will this matter after the task? After the session? For this user only? For everyone?
- Superseding must be scope-aware. Temporary overrides shouldn't accidentally replace permanent facts.
Further reading
- What is AI agent memory? — The foundational guide to how agent memory systems work.
- How Claude Code memory works — A real-world example of scoped memory in a coding agent.
- db0 documentation — API reference for db0's four-scope memory system.