# CodeDocs Vault

## Superset Repository Analysis

A multi-document analysis of the Superset codebase, written for someone learning the system from the ground up.

Superset is a desktop "code editor for AI agents" that orchestrates multiple CLI coding agents (Claude Code, Codex, Cursor, Gemini, …) in parallel, using git worktrees as the isolation primitive. The desktop app is Electron; a deployable host-service runs the workspace/git/PTY/chat business logic; and a Postgres-backed cloud provides tasks, integrations, billing, and a relay for cross-machine operations.
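To make "git worktrees as the isolation primitive" concrete, here is a minimal shell sketch of the idea: each agent session gets its own worktree (a separate working directory on its own branch, backed by the same object store), so parallel agents can edit the same repo without clobbering each other. The paths and branch names below are illustrative, not Superset's actual layout.

```shell
set -e
repo=$(mktemp -d)
git -C "$repo" init -q -b main
git -C "$repo" config user.email "demo@example.com"
git -C "$repo" config user.name  "demo"
git -C "$repo" commit -q --allow-empty -m "init"

# One worktree per agent session, each on its own branch
git -C "$repo" worktree add -q -b agent/claude "$repo-claude"
git -C "$repo" worktree add -q -b agent/codex  "$repo-codex"

# Edits in one worktree are invisible to the other until merged
echo "claude was here" > "$repo-claude/notes.txt"
git -C "$repo" worktree list
```

Because worktrees share one `.git` object store, this is much cheaper than cloning the repo per agent, while still giving each agent a private checkout and branch.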

## Reading order

| Doc | Topic |
| --- | --- |
| 00-overview.md | Purpose, problem, audience, tech stack, repo layout, 60-second mental model |
| 01-architecture.md | Five-process model, host-service boundaries, worktree isolation, IPC patterns, manifest adoption |
| 02-execution-flow.md | Entry points; workspace creation, terminal launch, agent dispatch, cloud→device flows |
| 03-llm-integration.md | Models, Vercel AI SDK + Mastracode, prompts, MCP tools, streaming, guardrails, hooks |
| 04-cloud-and-data.md | Web/API/DB/auth/sync/SDK/relay; Better Auth; Postgres + SQLite; Electric SQL + Durable Streams |
| 05-design-patterns.md | Recurring patterns, tradeoffs, the V1 → V2 narrative, lessons learned |
| 06-key-files.md | Map of essential files → responsibility; prompt index; "if you only read 15 files" list |
| 07-lessons-for-swisscheese.md | What a meta-harness for AI code review (Swisscheese) should lift, adapt, and skip from Superset |

## Methodology note

The analysis was produced by reading the code, plans, and architecture docs directly. Quotes are verbatim where possible (especially prompts), and file paths include line numbers where they help. Where the source is a plan rather than an implementation, the doc explicitly says "in flight"; be aware that those parts may have shifted by the time you read this.