The maximum amount of text an AI model can process at once
The context window is the maximum amount of text (measured in tokens) that an AI model can process in a single request. Everything in the context window — your conversation history, system prompt, tool outputs, and SOUL.md — must fit within this limit for the model to respond.
Modern LLMs like Claude have context windows measured in tens to hundreds of thousands of tokens (a token is roughly 4 characters of English text). As your conversation with OpenClaw grows longer, the accumulated context approaches this limit. OpenClaw uses context compaction to handle this, but the context window size still determines how much history the agent can actively use at once.
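The relationship between characters, tokens, and the window limit can be sketched with the rough 4-characters-per-token heuristic above. This is an illustrative estimate only, not OpenClaw's actual tokenizer; the function names and the 200,000-token default are assumptions for the example:

```python
# Rough heuristic from the text: ~4 characters per token of English prose.
CHARS_PER_TOKEN = 4

def estimate_tokens(text: str) -> int:
    """Approximate token count of a string (heuristic, not a real tokenizer)."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(messages: list[str], context_window: int = 200_000) -> bool:
    """Check whether accumulated history stays under an assumed window size."""
    total = sum(estimate_tokens(m) for m in messages)
    return total <= context_window

# System prompt, SOUL.md contents, and conversation turns all count
# against the same limit.
history = ["You are a helpful agent.", "Summarize the repo.", "Done." * 50]
print(fits_in_context(history))
```

A real deployment would use the model provider's tokenizer for exact counts; the heuristic is only good enough for ballpark budgeting.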
Context window size determines how much an agent can "hold in mind" at once. Larger context windows enable more complex, multi-step tasks without triggering compaction. Model selection in Clawfleet lets you choose models with larger context windows for demanding tasks.
Clawfleet manages your OpenClaw instance — context window management, backups, restarts, and cost tracking — all included. Start for $1.
Deploy for $1 →