Entry Points & Execution Flow
Host Process Entry Point
File: src/index.ts:549-747
The single entry point is the main() function, guarded by a direct-run check:
```typescript
// src/index.ts:737-747
const isDirectRun =
  process.argv[1] &&
  new URL(import.meta.url).pathname ===
    new URL(`file://${process.argv[1]}`).pathname;

if (isDirectRun) {
  main().catch((err) => {
    logger.error({ err }, 'Failed to start NanoClaw');
    process.exit(1);
  });
}
```

This guard prevents main() from running during test imports — a clean pattern for testable entry points.
Startup Sequence
main()
│
├── 1. ensureContainerSystemRunning() # Verify Docker/Apple Container is up
│ └── ensureContainerRuntimeRunning() # src/container-runtime.ts
│ └── cleanupOrphans() # Kill leftover nanoclaw-* containers
│
├── 2. initDatabase() # Create SQLite schema + migrations
│ └── createSchema() # src/db.ts:17-149
│ └── migrateJsonState() # Legacy JSON → SQLite migration
│
├── 3. loadState() # Load in-memory state from SQLite
│ ├── lastTimestamp # Global message cursor
│ ├── lastAgentTimestamp # Per-group cursors (JSON blob)
│ ├── sessions # Per-group session IDs
│ └── registeredGroups # Group registrations
│
├── 4. ensureOneCLIAgent() for all groups # Recover from missed credential creates
│
├── 5. restoreRemoteControl() # Resume remote control session if active
│
├── 6. Register shutdown handlers # SIGTERM/SIGINT → graceful queue drain
│
├── 7. Initialize channels # For each registered channel:
│ ├── factory(channelOpts) # Create channel with callbacks
│ └── channel.connect() # Connect to platform
│
├── 8. startSchedulerLoop() # Begin polling for due tasks
│
├── 9. startIpcWatcher() # Begin polling IPC directories
│
├── 10. queue.setProcessMessagesFn() # Wire up message processing
│
├── 11. recoverPendingMessages() # Check for crash-orphaned messages
│
└── 12. startMessageLoop() # Enter infinite polling loop
Channel Initialization (src/index.ts:615-674)
Channels self-register via a barrel import (import './channels/index.js' at line 17). Each channel module calls registerChannel() at import time. At startup, main() iterates through registered channel names, calls the factory, and skips channels with missing credentials:
```typescript
// src/index.ts:658-670
for (const channelName of getRegisteredChannelNames()) {
  const factory = getChannelFactory(channelName)!;
  const channel = factory(channelOpts);
  if (!channel) {
    logger.warn({ channel: channelName }, 'Channel installed but credentials missing — skipping.');
    continue;
  }
  channels.push(channel);
  await channel.connect();
}
```

The channelOpts object (lines 616-653) wires three callbacks that all channels share:
- onMessage — Store incoming messages, with sender allowlist filtering
- onChatMetadata — Track chat names and activity timestamps
- registeredGroups — Closure returning current group registrations
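These shared callbacks can be sketched as a small interface. The field names come from the list above; the message shape, the allowlist contents, and the storage array below are illustrative assumptions, not the real src/index.ts types.

```typescript
// Illustrative sketch of the shared channelOpts shape (assumed types).
interface IncomingMessage {
  chatJid: string;
  sender: string;
  text: string;
  timestamp: number;
}

interface ChannelOpts {
  onMessage: (msg: IncomingMessage) => void; // store + allowlist filter
  onChatMetadata: (chatJid: string, name: string) => void; // track chat names
  registeredGroups: () => Set<string>; // closure over current registrations
}

// A minimal onMessage with sender allowlist filtering (hypothetical storage).
const stored: IncomingMessage[] = [];
const allowlist = new Set(['alice@s.whatsapp.net']);

const opts: ChannelOpts = {
  onMessage: (msg) => {
    if (!allowlist.has(msg.sender)) return; // drop non-allowlisted senders
    stored.push(msg);
  },
  onChatMetadata: () => {},
  registeredGroups: () => new Set(['group1@g.us']),
};

opts.onMessage({ chatJid: 'group1@g.us', sender: 'alice@s.whatsapp.net', text: 'hi', timestamp: 1 });
opts.onMessage({ chatJid: 'group1@g.us', sender: 'mallory@s.whatsapp.net', text: 'spam', timestamp: 2 });
```

Keeping the allowlist check inside onMessage means every channel gets the same filtering for free, rather than each platform adapter re-implementing it.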
Container Entry Point
File: container/agent-runner/src/index.ts:518-629
The container process reads JSON from stdin, runs the Claude Agent SDK, and writes JSON to stdout.
Container Startup Sequence
main()
│
├── 1. readStdin() # Read full JSON from stdin until EOF
│ └── Parse ContainerInput # {prompt, sessionId, groupFolder, chatJid, isMain, ...}
│
├── 2. Set up SDK environment # Credentials arrive via ANTHROPIC_BASE_URL proxy
│
├── 3. Clean stale _close sentinel # Leftover from previous container runs
│
├── 4. Build initial prompt
│ ├── Prefix [SCHEDULED TASK] if applicable
│ └── Drain pending IPC input messages
│
├── 5. Script phase (scheduled tasks only)
│ ├── runScript(script) # Execute bash, 30s timeout
│ ├── Parse JSON output # { wakeAgent: bool, data?: any }
│ └── Exit early if wakeAgent=false
│
└── 6. Query loop (infinite)
├── runQuery() # SDK query() with MessageStream
│ ├── Poll IPC for follow-up messages during query
│ ├── Stream results via writeOutput()
│ └── Return {newSessionId, lastAssistantUuid, closedDuringQuery}
│
├── If _close sentinel → break
│
├── Emit session update marker
│
└── waitForIpcMessage() # Block until next message or _close
├── New message → loop back to runQuery()
└── _close → break and exit
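Step 1 above, reading the full JSON payload from stdin until EOF, can be sketched as a small helper. The helper name and its generic signature are assumptions: it accepts any async iterable of chunks (process.stdin in the real container), and the ContainerInput fields are taken from the diagram.

```typescript
// Sketch: collect an async iterable of chunks (e.g. process.stdin) until the
// stream ends (EOF), then parse the accumulated text as the container input.
interface ContainerInput {
  prompt: string;
  sessionId?: string;
  groupFolder: string;
  chatJid: string;
  isMain: boolean;
}

async function readJsonInput(
  chunks: AsyncIterable<string | Uint8Array>,
): Promise<ContainerInput> {
  let raw = '';
  for await (const chunk of chunks) {
    raw += chunk.toString(); // accumulate until EOF; JSON may span many chunks
  }
  return JSON.parse(raw) as ContainerInput;
}

// In the real container this would be roughly:
//   const input = await readJsonInput(process.stdin);
```

Accepting any async iterable keeps the function testable without a real stdin, which mirrors the direct-run guard's goal of import-safe, testable entry points.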
The Query Loop Pattern (container/agent-runner/src/index.ts:579-615)
This is one of the most interesting patterns in the codebase. The container doesn't exit after answering — it stays alive waiting for follow-up messages:
```typescript
// container/agent-runner/src/index.ts:582-615
while (true) {
  const queryResult = await runQuery(prompt, sessionId, ...);
  if (queryResult.closedDuringQuery) break; // Host sent _close

  // Emit session marker so host can track session ID
  writeOutput({ status: 'success', result: null, newSessionId: sessionId });

  // Block until next IPC message or _close sentinel
  const nextMessage = await waitForIpcMessage();
  if (nextMessage === null) break; // _close sentinel
  prompt = nextMessage; // Use new message as next query prompt
}
```

This means a single container can handle an entire multi-turn conversation. The host decides when to close it via idle timeout.
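waitForIpcMessage itself is not shown here. A plausible sketch factors the decision made on each directory scan into a pure function; the file-naming convention, and the rule that the _close sentinel wins over a pending message, are assumptions.

```typescript
// Sketch of the per-scan decision inside a waitForIpcMessage-style poll loop.
// On each tick the caller lists the group's IPC input directory and asks:
// is the close sentinel present? is a message waiting? or keep sleeping?
type IpcScan =
  | { kind: 'close' }
  | { kind: 'message'; file: string }
  | { kind: 'wait' };

function classifyIpcScan(files: string[]): IpcScan {
  if (files.includes('_close')) return { kind: 'close' }; // host asked us to exit (assumed priority)
  const messages = files.filter((f) => f.endsWith('.json')).sort(); // timestamped names sort oldest-first
  if (messages.length > 0) return { kind: 'message', file: messages[0] };
  return { kind: 'wait' }; // nothing yet; caller sleeps and rescans
}
```

Separating the classification from the sleep-and-rescan loop keeps the blocking behavior trivial and the interesting logic unit-testable.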
Message Processing Pipeline
Phase 1: Message Loop (src/index.ts:419-520)
The message loop polls SQLite every 2 seconds for new messages across all registered groups:
getNewMessages(registeredJids, lastTimestamp)
│
▼
Group by chat_jid
│
▼
For each group:
├── Check trigger requirement (@Andy prefix)
├── Check sender allowlist
│
├── If active container exists:
│ └── queue.sendMessage() → write IPC file → piped into running query
│
└── If no active container:
└── queue.enqueueMessageCheck() → spawns new container
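The routing decision at the bottom of this diagram can be sketched as a pure function. The @Andy trigger comes from the diagram; the exact regex, the function name, and the rule that the main group needs no trigger (stated in Phase 2 below) are assumptions.

```typescript
// Sketch of the per-message routing decision from the loop above.
type Route = 'drop' | 'pipe-to-active-container' | 'enqueue-new-container';

function routeMessage(opts: {
  text: string;
  senderAllowed: boolean;
  isMainGroup: boolean; // main group needs no trigger prefix
  hasActiveContainer: boolean;
}): Route {
  if (!opts.senderAllowed) return 'drop'; // allowlist check first
  // Non-main groups only react to messages addressed to the bot.
  const triggered = opts.isMainGroup || /^@Andy\b/i.test(opts.text);
  if (!triggered) return 'drop';
  // Active container: pipe via IPC file; otherwise spawn a fresh one.
  return opts.hasActiveContainer ? 'pipe-to-active-container' : 'enqueue-new-container';
}
```
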
Phase 2: Group Processing (src/index.ts:219-335)
When a new container is needed, processGroupMessages() runs:
- Fetch all messages since the last agent response (cursor recovery)
- Check trigger pattern (non-main groups only)
- Format messages as XML
- Advance cursor optimistically (roll back on error)
- Call runAgent() → runContainerAgent()
- Stream output back to channel via callback
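The optimistic cursor advance with rollback can be sketched as a small wrapper; the cursor store and the helper name are illustrative, not the real src/index.ts code.

```typescript
// Sketch: advance the per-group cursor before processing, roll back on failure.
// On success the same messages are never double-processed; on error the old
// cursor is restored so the messages are re-fetched on the next poll.
async function withOptimisticCursor<T>(
  cursor: { get(): number; set(v: number): void },
  newTimestamp: number,
  fn: () => Promise<T>,
): Promise<T> {
  const previous = cursor.get();
  cursor.set(newTimestamp); // advance optimistically
  try {
    return await fn();
  } catch (err) {
    cursor.set(previous); // roll back so messages are retried
    throw err;
  }
}
```

The trade-off is at-least-once delivery on failure: a retried batch may be partially re-processed, which is usually acceptable for chat-style workloads.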
Phase 3: Container Execution (src/container-runner.ts:277-671)
spawn(docker/container, args, {stdio: ['pipe', 'pipe', 'pipe']})
│
├── stdin.write(JSON.stringify(input))
├── stdin.end()
│
├── stdout.on('data') → parse OUTPUT_START/END markers → onOutput callback
│ │
│ ▼
│ channel.sendMessage()
│
├── stderr.on('data') → debug logging (no timeout reset)
│
└── container.on('close') → resolve promise → cleanup
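Because stdout chunks can split anywhere, parsing the OUTPUT_START/END markers needs a small stateful accumulator. A sketch (the exact marker strings and factory name are assumptions):

```typescript
// Sketch of a marker-framed stdout parser. Chunks may split mid-marker, so we
// buffer everything and repeatedly extract complete frames.
const OUTPUT_START = 'OUTPUT_START';
const OUTPUT_END = 'OUTPUT_END';

function createOutputParser(onOutput: (payload: string) => void) {
  let buffer = '';
  return (chunk: string): void => {
    buffer += chunk;
    while (true) {
      const start = buffer.indexOf(OUTPUT_START);
      if (start === -1) return; // no frame started yet
      const end = buffer.indexOf(OUTPUT_END, start + OUTPUT_START.length);
      if (end === -1) return; // frame not complete; wait for more chunks
      onOutput(buffer.slice(start + OUTPUT_START.length, end).trim());
      buffer = buffer.slice(end + OUTPUT_END.length); // drop consumed frame
    }
  };
}

// Usage: container.stdout.on('data', (d) => parse(d.toString()))
```
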
Scheduled Task Execution
File: src/task-scheduler.ts:245-279
The scheduler polls SQLite every 60 seconds for due tasks:
startSchedulerLoop()
│
└── Every 60s:
├── getDueTasks() # WHERE next_run <= now AND status = 'active'
│
└── For each task:
├── Re-check status (may have been paused/cancelled)
├── queue.enqueueTask(groupJid, taskId, fn)
│
└── fn = async () => {
runTask() # src/task-scheduler.ts:78-241
├── runContainerAgent() # Full container with session
├── updateTaskAfterRun() # Compute next_run, log result
└── logTaskRun() # Duration, status, result to DB
}
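The next_run bookkeeping inside updateTaskAfterRun() can be sketched as follows. The schedule representation (a fixed interval or a one-shot time) is an assumption; real schedules may be cron expressions.

```typescript
// Sketch: compute the next due time after a run finishes.
type Schedule =
  | { kind: 'interval'; everyMs: number }
  | { kind: 'once'; atMs: number };

function computeNextRun(schedule: Schedule, nowMs: number): number | null {
  switch (schedule.kind) {
    case 'interval':
      // Anchor on "now", not on the previous next_run, so a long-overdue
      // task fires once rather than replaying every missed interval.
      return nowMs + schedule.everyMs;
    case 'once':
      return null; // one-shot tasks have no next run; mark them completed
  }
}
```
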
IPC Event Processing
File: src/ipc.ts:30-155
The IPC watcher polls every 1 second for JSON files written by container agents:
startIpcWatcher()
│
└── Every 1s, for each group's IPC directory:
│
├── /ipc/{group}/messages/*.json → route to channel.sendMessage()
│
└── /ipc/{group}/tasks/*.json → processTaskIpc()
├── schedule_task → createTask() + computeNextRun()
├── pause_task → updateTask(status: 'paused')
├── resume_task → updateTask(status: 'active')
├── cancel_task → deleteTask()
├── update_task → updateTask(partial fields)
├── refresh_groups → syncGroups() + writeGroupsSnapshot()
└── register_group → registerGroup() (main only)
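The task branch of this dispatch can be sketched as a lookup keyed on the file's type field. The event shape is an assumption, and the refresh_groups/register_group branches, which carry different payloads, are omitted here.

```typescript
// Sketch of processTaskIpc-style routing. Each JSON file carries a `type`
// discriminator; unknown types are ignored so stray files can't crash the loop.
type TaskIpcType = 'schedule_task' | 'pause_task' | 'resume_task' | 'cancel_task' | 'update_task';

interface TaskIpcEvent {
  type: string; // untrusted: comes from a file on disk
  taskId: string;
}

function processTaskIpc(
  event: TaskIpcEvent,
  handlers: Partial<Record<TaskIpcType, (taskId: string) => void>>,
): void {
  const handler = handlers[event.type as TaskIpcType];
  if (!handler) return; // tolerate unknown/future event types
  handler(event.taskId);
}
```
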
Graceful Shutdown
SIGTERM/SIGINT received
│
├── queue.shutdown(10000)
│ └── Set shuttingDown = true
│ └── Log active containers (but don't kill — they self-terminate via idle/timeout)
│
├── channel.disconnect() for all channels
│
└── process.exit(0)
Containers are not killed on shutdown — they're left to finish naturally via idle timeout or container timeout. This prevents work loss during WhatsApp reconnection restarts.