The Agentic Loop
The agentic loop is the core execution cycle that makes OpenClaw an agent rather than a simple chatbot. It's the mechanism by which the AI can reason, use tools, observe results, and iterate until a task is complete.
Overview
The loop follows this pattern:
```
User sends message
  → LLM receives: system prompt + conversation history + new message
  → LLM responds with text AND/OR tool_use blocks
  → If tool_use blocks present:
      → Execute each tool
      → Feed tool results back to LLM
      → LLM sees results and responds again (text and/or more tool_use)
      → REPEAT
  → If no tool_use blocks:
      → Response is final
      → Return to caller
```
This is an iterative loop, not a single call. The agent might call 10+ tools before producing a final answer. Each iteration is called a "turn."
Where the Loop Lives
The loop has two layers:
Outer Layer: OpenClaw (src/agents/pi-embedded-runner/run/attempt.ts)
OpenClaw owns setup, teardown, and event handling. The key line is:
```typescript
// attempt.ts — the single await that drives the entire loop
await abortable(activeSession.prompt(effectivePrompt));
```

This single await blocks until the agent is done — potentially after many tool calls. Everything above it is setup; everything below is cleanup.
Inner Layer: Pi Agent SDK (@mariozechner/pi-coding-agent)
The SDK's session.prompt() method contains the actual loop:
```
session.prompt(userMessage):
  while true:
    1. Assemble messages: system prompt + history + new user message + any tool results
    2. Call LLM via streamFn (Anthropic API, Ollama, Gemini, etc.)
    3. Stream response tokens
    4. Parse response for tool_use blocks
    5. If tool_use blocks found:
       a. For each tool_use block:
          - Look up tool by name in registered tools
          - Execute tool function with parsed arguments
          - Collect result (text, error, or structured data)
       b. Append tool results to conversation history
       c. Continue loop (go to step 1)
    6. If no tool_use blocks:
       - Agent is done
       - Break loop
       - Return
```
OpenClaw never sees this loop directly. It interacts with it through event subscriptions.
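The loop above can be sketched in TypeScript. Everything here is illustrative: `runLoop`, `ToolUse`, and `Tool` are invented stand-ins, not the Pi SDK's real API.

```typescript
// Hypothetical types standing in for the SDK's real message/tool shapes.
type ToolUse = { id: string; name: string; input: unknown };
type LlmResponse = { text: string; toolUses: ToolUse[] };
type Tool = (input: unknown) => Promise<string>;

async function runLoop(
  callLlm: (history: unknown[]) => Promise<LlmResponse>,
  tools: Map<string, Tool>,
  history: unknown[],
): Promise<string> {
  while (true) {
    // Steps 1–3: assemble messages, call the LLM, stream the response.
    const response = await callLlm(history);
    history.push({ role: "assistant", content: response.text });
    // Step 6: no tool_use blocks means the agent is done.
    if (response.toolUses.length === 0) return response.text;
    // Step 5: execute each tool and feed the result back as a tool_result.
    for (const use of response.toolUses) {
      const tool = tools.get(use.name);
      const content = tool ? await tool(use.input) : `unknown tool: ${use.name}`;
      history.push({
        role: "user",
        content: [{ type: "tool_result", tool_use_id: use.id, content }],
      });
    }
  }
}
```

The termination condition is worth noting: the loop never counts iterations, it simply exits the first time the LLM answers without requesting a tool.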
Event-Driven Architecture
While the loop runs, OpenClaw subscribes to events emitted by the SDK:
```typescript
// attempt.ts
const sessionUnsubscribe = params.session.subscribe(
  createEmbeddedPiSessionEventHandler(ctx)
);
```

Event Types
| Event | When | What OpenClaw Does |
|---|---|---|
| `agent_start` | Loop begins | Log start, initialize state |
| `message_start` | LLM begins a new response | Reset text accumulation buffers |
| `message_update` | Streaming tokens arrive | Accumulate text, emit partial replies for UI/channel |
| `message_end` | LLM response complete | Record usage stats, finalize message |
| `tool_execution_start` | Tool call detected | Extract tool name/args, emit tool event for UI |
| `tool_execution_update` | Tool producing output | Stream partial tool output |
| `tool_execution_end` | Tool finished | Capture result, track success/failure |
| `auto_compaction_start` | Context too large | Log compaction, notify UI |
| `auto_compaction_end` | Compaction done | Resume with compressed context |
| `agent_end` | Loop complete | Finalize all state |
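These events are naturally modeled as a discriminated union on `type`. The payload fields below are assumptions for illustration; the SDK's actual event types differ in detail.

```typescript
// Hypothetical event shapes; only the `type` discriminants come from the docs.
type SessionEvent =
  | { type: "agent_start" }
  | { type: "message_start" }
  | { type: "message_update"; text: string }
  | { type: "message_end" }
  | { type: "tool_execution_start"; toolName: string; args: unknown }
  | { type: "tool_execution_update"; output: string }
  | { type: "tool_execution_end"; isError: boolean }
  | { type: "auto_compaction_start" }
  | { type: "auto_compaction_end" }
  | { type: "agent_end" };

// Narrowing on `type` gives each branch a fully typed payload.
function describeEvent(evt: SessionEvent): string {
  switch (evt.type) {
    case "tool_execution_start":
      return `tool ${evt.toolName} started`;
    case "message_update":
      return `chunk: ${evt.text}`;
    default:
      return evt.type;
  }
}
```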
Event Handler Architecture
```
src/agents/pi-embedded-subscribe.handlers.ts      ← Main dispatcher (switch on event.type)
├── pi-embedded-subscribe.handlers.messages.ts    ← Text streaming handlers
├── pi-embedded-subscribe.handlers.tools.ts       ← Tool execution handlers
└── pi-embedded-subscribe.ts                      ← State management & subscription setup
```
The dispatcher in handlers.ts:
```typescript
export function createEmbeddedPiSessionEventHandler(ctx) {
  return (evt) => {
    switch (evt.type) {
      case "tool_execution_start": handleToolExecutionStart(ctx, evt); return;
      case "tool_execution_update": handleToolExecutionUpdate(ctx, evt); return;
      case "tool_execution_end": handleToolExecutionEnd(ctx, evt); return;
      case "message_start": handleMessageStart(ctx, evt); return;
      case "message_update": handleMessageUpdate(ctx, evt); return;
      case "message_end": handleMessageEnd(ctx, evt); return;
      // ... agent_start, agent_end, compaction events
    }
  };
}
```

State During the Loop
The subscription maintains mutable state throughout the loop:
```typescript
const state: EmbeddedPiSubscribeState = {
  assistantTexts: [],         // Accumulated response text chunks
  toolMetas: [],              // Metadata for each tool call
  toolMetaById: new Map(),    // Tool metadata indexed by call ID
  toolSummaryById: new Set(), // Tools already summarized (avoid duplicates)
  lastToolError: undefined,   // Last tool error for diagnostics
  blockState: {               // Tracks thinking/code block boundaries
    thinking: false,
    final: false,
    inlineCode: createInlineCodeState()
  },
};
```

Tool Registration & Execution
How Tools Are Created
Before the loop starts, OpenClaw creates all available tools:
```typescript
// attempt.ts
const toolsRaw = createOpenClawCodingTools({
  exec: { elevated: params.bashElevated },
  sandbox,
  messageProvider: params.messageChannel,
  agentAccountId: params.agentAccountId,
  // ...
});
```

`createOpenClawCodingTools()` (in `src/agents/pi-tools.ts`) assembles:
- Built-in tools: bash, read file, write file, glob, grep, web search, message, etc.
- Channel tools: Channel-specific tools (e.g., Discord guild actions)
- Plugin tools: Tools registered by extensions
- Skill tools: Tools defined in active skills
Each tool is a factory that receives runtime context:
```typescript
type ToolFactory = (ctx: ToolContext) => AgentTool | AgentTool[] | null;
```

How the SDK Executes Tools
When the LLM produces a tool_use block like:
```json
{
  "type": "tool_use",
  "id": "toolu_abc123",
  "name": "bash",
  "input": { "command": "git status" }
}
```

The SDK:

- Looks up `"bash"` in the registered tool map
- Calls the tool's execute function with the parsed `input`
- Wraps the result as a `tool_result` message
- Appends it to conversation history
- Continues the loop
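The lookup, execute, and wrap steps can be sketched as a single function. The message shapes mirror the JSON examples in this section, but `executeToolUse` itself and the type names are invented for illustration, not SDK code.

```typescript
// Invented types mirroring the tool_use / tool_result JSON shown in this section.
type ToolUseBlock = { type: "tool_use"; id: string; name: string; input: unknown };
type ToolResultMessage = {
  role: "user";
  content: [{ type: "tool_result"; tool_use_id: string; content: string }];
};

async function executeToolUse(
  block: ToolUseBlock,
  tools: Map<string, (input: unknown) => Promise<string>>,
): Promise<ToolResultMessage> {
  const tool = tools.get(block.name); // 1. look up by name
  const content = tool
    ? await tool(block.input) // 2. run with the parsed input
    : `Error: unknown tool "${block.name}"`; // errors go back to the LLM as text
  return {
    // 3. wrap as a tool_result message for the conversation history
    role: "user",
    content: [{ type: "tool_result", tool_use_id: block.id, content }],
  };
}
```

Note the unknown-tool case: rather than crashing the loop, the error is returned as result text so the LLM can recover on the next turn.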
Tool Result Feedback
Tool results are inserted into the conversation as:
```json
{
  "role": "user",
  "content": [
    {
      "type": "tool_result",
      "tool_use_id": "toolu_abc123",
      "content": "On branch main\nnothing to commit, working tree clean"
    }
  ]
}
```

The LLM sees these results in its next iteration and can decide to call more tools or produce a final text response.
Loop Termination
The loop ends when any of these conditions is met:
| Condition | Mechanism |
|---|---|
| Natural completion | LLM returns text without any tool_use blocks |
| Timeout | Configurable timeout triggers abortRun(true) |
| External abort | User cancels or channel disconnects; abort signal fires |
| Compaction failure | Context window exceeded and compaction fails after retries |
| Fatal error | Provider error (auth, rate limit, billing) that can't be retried |
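The distinctions in this table matter downstream (a timeout should be reported differently from a user cancel). A hedged sketch of how one might classify an outcome; the names and precedence order are invented, not OpenClaw's actual result types:

```typescript
// Illustrative classification only; OpenClaw's real result handling differs.
type TerminationReason =
  | "natural"             // final text, no tool_use blocks
  | "timeout"             // abortRun(true) fired
  | "user_abort"          // external abort signal
  | "compaction_failure"  // context exceeded, compaction retries failed
  | "fatal_error";        // unretryable provider error

function classifyTermination(opts: {
  aborted: boolean;
  timedOut: boolean;
  error?: "compaction" | "provider";
}): TerminationReason {
  if (opts.error === "compaction") return "compaction_failure";
  if (opts.error === "provider") return "fatal_error";
  // A timeout also aborts, so check timedOut before the generic abort flag.
  if (opts.timedOut) return "timeout";
  if (opts.aborted) return "user_abort";
  return "natural";
}
```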
Timeout Handling
```typescript
// attempt.ts — timeout setup
const timeoutMs = params.timeout ?? DEFAULT_TIMEOUT;
const timeoutHandle = setTimeout(() => {
  abortRun(true); // true = timeout (not user cancel)
}, timeoutMs);
```

Abort Mechanism
```typescript
const runAbortController = new AbortController();

function abortRun(isTimeout?: boolean) {
  runAbortController.abort();
  // SDK detects abort signal and exits the loop gracefully
}
```

User Injection During the Loop
Users can inject new messages while the agent is running:
```typescript
const queueHandle: EmbeddedPiQueueHandle = {
  queueMessage: async (text: string) => {
    await activeSession.steer(text); // Injects into ongoing loop
  },
  isStreaming: () => activeSession.isStreaming,
  abort: abortRun,
};
```

`session.steer(text)` injects a user message into the conversation during the loop. The SDK will include it in the next LLM call, allowing users to course-correct the agent mid-execution.
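Steering only makes sense while the loop is running. A sketch of a queue handle that routes on `isStreaming`; the conditional and all names here are an invented refinement for illustration, not OpenClaw's actual handle:

```typescript
// Illustrative steering queue: inject mid-loop when streaming, otherwise
// hold messages for the next run. Not the real EmbeddedPiQueueHandle.
function createSteerQueue(session: {
  isStreaming: () => boolean;
  steer: (text: string) => void;
}) {
  const pending: string[] = [];
  return {
    queueMessage(text: string): void {
      if (session.isStreaming()) session.steer(text); // reaches the next LLM call
      else pending.push(text); // loop not running: save for the next prompt
    },
    // Drain held messages when a new run starts.
    drain: (): string[] => pending.splice(0),
  };
}
```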
Context Compaction
When the conversation history exceeds the model's context window, the SDK triggers auto-compaction:
```
auto_compaction_start
  → SDK summarizes older conversation turns
  → Replaces detailed history with compressed summary
  → Frees token budget for new turns
auto_compaction_end
  → Loop continues with compressed context
```
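The trigger condition comes down to token-budget arithmetic. The policy below is an illustrative assumption, not the SDK's exact heuristic; the reserve configured via `ensurePiCompactionReserveTokens` plays the role of `reserveTokens` here.

```typescript
// Illustrative policy: compact once history would eat into the budget that
// must stay free for the next response and tool results.
function shouldCompact(
  historyTokens: number,
  contextWindow: number,
  reserveTokens: number,
): boolean {
  return historyTokens > contextWindow - reserveTokens;
}
```

For example, with a 200k-token window and a 20k reserve, compaction triggers once the history passes 180k tokens.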
OpenClaw configures compaction parameters:
```typescript
ensurePiCompactionReserveTokens(settings, {
  floor: resolveCompactionReserveTokensFloor(params.model),
});
```

Provider Abstraction
The streaming function is swappable:
```typescript
if (params.model.api === "ollama") {
  activeSession.agent.streamFn = createOllamaStreamFn(ollamaBaseUrl);
} else {
  activeSession.agent.streamFn = streamSimple; // Anthropic API (default)
}

// Wrap with tracing (each wrapper decorates the current streamFn)
activeSession.agent.streamFn = cacheTrace.wrapStreamFn(activeSession.agent.streamFn);
activeSession.agent.streamFn = anthropicPayloadLogger.wrapStreamFn(activeSession.agent.streamFn);
```

This lets the same loop work with Anthropic, Google Gemini, Ollama (local models), and other providers.
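The wrapping pattern is the classic decorator: each wrapper receives the current stream function and returns a new one, so tracing layers compose without knowing which provider sits underneath. A minimal sketch with invented types (`StreamFn`, `withLogging`):

```typescript
// Invented, simplified StreamFn: the real one streams structured events.
type StreamFn = (payload: string) => Promise<string>;

function withLogging(inner: StreamFn, log: string[]): StreamFn {
  return async (payload) => {
    log.push(`request: ${payload}`); // observe the outgoing call
    const result = await inner(payload); // delegate to the wrapped provider
    log.push(`response: ${result}`);
    return result;
  };
}
```

Because each wrapper is assigned back to the same slot, later wrappers enclose earlier ones, and the provider swap at the bottom of the stack stays invisible to every layer above it.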
Diagram: Complete Loop Lifecycle
```
┌───────────────────────────────────────────────────────────┐
│ runEmbeddedAttempt()                                      │
│                                                           │
│ [Setup Phase]                                             │
│   ├─ Resolve workspace + sandbox                          │
│   ├─ Load skills + bootstrap files                        │
│   ├─ Create tools (bash, read, write, glob, grep, ...)    │
│   ├─ Build system prompt (modular sections)               │
│   ├─ Open SessionManager (load history)                   │
│   ├─ createAgentSession() from Pi SDK                     │
│   └─ Subscribe to session events                          │
│                                                           │
│ [Execution Phase]                                         │
│   await session.prompt(userMessage)                       │
│   ┌─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─┐   │
│   │ Pi SDK Inner Loop                                 │   │
│   │                                                   │   │
│   │  ┌──────────┐   ┌──────────┐   ┌──────────────┐  │   │
│   │  │ Call LLM │──►│  Parse   │──►│ Has tool_use?│  │   │
│   │  │ (stream) │   │ Response │   └───┬──────┬───┘  │   │
│   │  └──────────┘   └──────────┘  yes  │      │ no   │   │
│   │       ▲                            ▼      │      │   │
│   │       │                      ┌─────────┐  │      │   │
│   │       │                      │ Execute │  │      │   │
│   │       │                      │  Tools  │  │      │   │
│   │       │                      └────┬────┘  │      │   │
│   │       │      tool results         │      │       │   │
│   │       └───────────────────────────┘      ▼       │   │
│   │                                      [Return]    │   │
│   └─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─┘   │
│                                                           │
│ [Teardown Phase]                                          │
│   ├─ Collect messages, usage, errors                      │
│   ├─ Release session write lock                           │
│   └─ Return EmbeddedRunAttemptResult                      │
└───────────────────────────────────────────────────────────┘
```