How OpenClaw handles context window limits in long-running sessions
Context compaction is OpenClaw's mechanism for keeping long conversations within the LLM's context window. When a session grows too long, OpenClaw automatically summarizes older parts of the conversation and replaces them with a compact summary, preserving essential information while freeing up context space.
When the accumulated context (conversation history + tools + memory) approaches the model's maximum context length, OpenClaw triggers compaction. It asks the LLM to summarize the conversation so far, retaining key decisions, facts, and task state. The full history is replaced with this summary, and the conversation continues. The original can be stored in MEMORY.md for later reference.
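The flow above can be sketched in a few lines. This is an illustrative sketch only, not OpenClaw's actual implementation: the names (`MAX_CONTEXT_TOKENS`, `summarize`, `maybe_compact`), the 4-characters-per-token estimate, and the keep-the-last-four-turns policy are all assumptions made for the example.

```python
# Hypothetical compaction sketch. All names and thresholds here are
# illustrative assumptions, not OpenClaw's real API or defaults.

MAX_CONTEXT_TOKENS = 8000       # assumed model context window
COMPACTION_THRESHOLD = 0.8      # compact once ~80% of the window is used

def estimate_tokens(messages):
    # Rough heuristic: ~4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def summarize(messages):
    # Stand-in for the LLM call that condenses older turns into a summary
    # retaining key decisions, facts, and task state.
    topics = {m["content"].split()[0] for m in messages}
    return "Summary of earlier conversation covering: " + ", ".join(sorted(topics))

def maybe_compact(messages):
    """Replace older turns with a summary once the window fills up."""
    if estimate_tokens(messages) < MAX_CONTEXT_TOKENS * COMPACTION_THRESHOLD:
        return messages                    # still plenty of room: no-op
    keep = messages[-4:]                   # retain the most recent turns verbatim
    summary = summarize(messages[:-4])     # condense everything older
    return [{"role": "system", "content": summary}] + keep
```

In a real system, `summarize` would be another model call, and the pre-compaction history could be written out (e.g. to MEMORY.md) before being discarded from the live context.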
Without compaction, long-running agents eventually hit the context limit and either fail or start "forgetting" earlier context. Compaction enables indefinitely long sessions, which is critical for agents running continuous workflows, research tasks, or ongoing projects. Clawfleet monitors compaction events so your agent never silently loses important context.
Clawfleet manages your OpenClaw instance — Context Compaction, backups, restarts, and cost tracking — all included. Start for $1.
Deploy for $1 →