CodeDocs Vault

Architecture

The shape and the seams. How the processes are arranged, who owns what, and how they talk.

This is the most important document in this analysis — it explains the load-bearing decision of the project: separating an Electron desktop shell from a deployable host-service, with git worktrees as the isolation primitive and a long-lived PTY daemon as the durability primitive.

Sources: apps/desktop/HOST_SERVICE_ARCHITECTURE.md, apps/desktop/HOST_SERVICE_BOUNDARIES.md, apps/desktop/HOST_SERVICE_LIFECYCLE.md, plus the code paths cited inline.


1. The Five Processes

flowchart TD
    subgraph Electron["Electron app (single binary)"]
        Main["Main process<br/>apps/desktop/src/main/<br/>• window/tray/auto-update<br/>• tRPC IPC router"]
        Preload["Preload<br/>apps/desktop/src/preload/<br/>contextBridge"]
        Renderer["Renderer (React)<br/>apps/desktop/src/renderer/<br/>TanStack Router + Zustand"]
        Main <--> Preload
        Preload <--> Renderer
    end
    subgraph PerOrg["Per-organization child processes"]
        Host["Host-Service (Hono)<br/>packages/host-service/<br/>• workspace CRUD<br/>• git<br/>• PTY supervisor<br/>• chat runtime<br/>• SQLite"]
        PTYD["PTY Daemon (Node)<br/>packages/pty-daemon/<br/>node-pty file descriptors"]
        Host <-->|"Unix socket<br/>NDJSON"| PTYD
    end
    Agent["Agent CLI (claude, codex, …)<br/>spawned by PTY daemon<br/>env=SUPERSET_*<br/>PATH=~/.superset/bin"]
    Renderer -->|"tRPC-electron<br/>(observables only)"| Main
    Renderer -->|"HTTP/WS tRPC<br/>(direct)"| Host
    Main -->|"spawn/adopt<br/>via manifest"| Host
    PTYD --> Agent
 
    Cloud["Cloud<br/>apps/api · apps/relay · electric-proxy"]
    Main <-->|"Better Auth"| Cloud
    Host <-->|"host registration · chat ·<br/>Electric SQL row sync"| Cloud

1.1 Main process — apps/desktop/src/main/

Owns the window, tray, and auto-update lifecycle, the tRPC IPC router the renderer talks to, host-service lifecycle (spawn/adopt via manifest), and the Better Auth session with the cloud (per the diagram above).

1.2 Renderer — apps/desktop/src/renderer/

A standard React SPA, but with two clients to two different servers: a trpc-electron client to the main process (window/OS concerns) and a direct HTTP/WebSocket tRPC client to the host-service (everything workspace-related).
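
A minimal sketch of the second of those clients, assuming tRPC v11-style helpers; HostRouter and getHostConnection() are hypothetical stand-ins for however the port and PSK are actually handed over:

// renderer: direct tRPC client to the host-service (illustrative sketch, not the real wiring)
import { createTRPCClient, httpBatchLink } from "@trpc/client";
import type { HostRouter } from "@superset/host-service"; // hypothetical type export

// Assume main already told the renderer where this org's host-service lives and which PSK to use.
declare function getHostConnection(): { port: number; secret: string };

const { port, secret } = getHostConnection();

export const hostClient = createTRPCClient<HostRouter>({
  links: [
    httpBatchLink({
      url: `http://127.0.0.1:${port}/trpc`,
      // The PSK authenticates the renderer directly to the host-service, bypassing main.
      headers: () => ({ authorization: `Bearer ${secret}` }),
    }),
  ],
});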

Routing is file-based via TanStack Router:

apps/desktop/src/renderer/routes/
├── __root.tsx
├── -layout.tsx              ← global error boundary
├── sign-in/
├── create-organization/
└── _authenticated/
    ├── layout.tsx           ← Collections provider, init effects
    └── _dashboard/
        ├── workspaces           ← v1 workspace list
        ├── v2-workspaces        ← v2 workspace list
        ├── projects
        ├── pending/{id}         ← workspace creation progress
        └── v2-workspace/{id}    ← active workspace (terminals + diffs)

State lives in many small Zustand stores:

1.3 Host-service — packages/host-service/

This is the most important non-obvious component. It's a standalone Hono HTTP server with tRPC endpoints that owns essentially all business logic: workspace CRUD, git operations, the PTY supervisor, the chat runtime, and the local SQLite database (see the diagram above).

It has zero Electron coupling. The contract (HOST_SERVICE_BOUNDARIES.md) makes this explicit:

// packages/host-service/src/app.ts
export function createApp(options: CreateAppOptions): CreateAppResult {
  const { config, providers } = options;
  // config:    { dbPath, cloudApiUrl, migrationsFolder, allowedOrigins }
  // providers: { auth, hostAuth, credentials, modelResolver }
}

What was removed from host-service to honour this boundary:

Why this matters: this code can be deployed to Docker / Lambda / Kubernetes / a remote Linux dev box and, once you give it the right providers, it will just work — including running the user's chat agents and PTYs on a remote machine. The renderer doesn't care if the host-service is on localhost or dev-box.us-east.example.com.

packages/host-service/src/serve.ts is the standalone entry point — reads env (ORGANIZATION_ID, PORT, HOST_SERVICE_SECRET, …), builds providers, calls createApp(), serves.
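
A rough sketch of that entry point, assuming the providers are plain objects you can satisfy per deployment; the provider bodies and extra env names below are placeholders, not the real implementations:

// illustrative standalone entry point; the real one is packages/host-service/src/serve.ts
import { serve } from "@hono/node-server";
import { createApp } from "./app";

const organizationId = process.env.ORGANIZATION_ID!;
const port = Number(process.env.PORT ?? 8787);
const secret = process.env.HOST_SERVICE_SECRET!;

// Assume the result exposes the Hono app; the config keys come from the excerpt above.
const { app } = createApp({
  config: {
    dbPath: process.env.DB_PATH ?? "./host.sqlite", // env name assumed
    cloudApiUrl: process.env.CLOUD_API_URL ?? "https://api.example.com",
    migrationsFolder: "./migrations",
    allowedOrigins: ["http://localhost"],
  },
  providers: {
    // Placeholder provider objects (shapes assumed): a real deployment would verify the
    // PSK, authenticate against the cloud, look up credentials, and resolve models.
    hostAuth: { verify: (token: string) => token === secret },
    auth: {},
    credentials: {},
    modelResolver: {},
  },
});

serve({ fetch: app.fetch, port });
console.log(`host-service for ${organizationId} listening on :${port}`);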

1.4 PTY daemon — packages/pty-daemon/

A long-lived Node process. Why Node, not Bun? node-pty's tty.ReadStream doesn't work under Bun (packages/pty-daemon/src/main.ts header). This is the only Node-only component.

Why a separate process at all (rather than embedding a PTY library inside host-service)? Durability: the daemon holds the node-pty file descriptors, so running PTYs and the agents attached to them survive host-service restarts and upgrades.

Since v0.5, the daemon is supervised by host-service, not by Electron — boot the host-service and it spawns the daemon if needed.
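
The wire between the two is a Unix socket carrying newline-delimited JSON (see the diagram above and the IPC table in section 3). A minimal sketch of the host-service side, with the socket path and message names as assumptions rather than the daemon's actual protocol:

// host-service side of the Unix-socket link to the PTY daemon (sketch; framing is NDJSON)
import net from "node:net";
import readline from "node:readline";

// Hypothetical socket path and message shape; the real protocol lives in packages/pty-daemon/.
const SOCKET_PATH = `${process.env.HOME}/.superset/pty-daemon.sock`;

type DaemonMessage = { type: string; [key: string]: unknown };

const socket = net.createConnection(SOCKET_PATH);

// One JSON object per line, in each direction.
const lines = readline.createInterface({ input: socket });
lines.on("line", (line) => {
  const msg = JSON.parse(line) as DaemonMessage;
  console.log("from daemon:", msg.type);
});

function send(msg: DaemonMessage): void {
  socket.write(JSON.stringify(msg) + "\n");
}

// Example: ask the daemon to spawn an agent PTY inside a worktree (message names made up).
socket.on("connect", () => {
  send({ type: "spawn", cwd: "/path/to/.worktrees/feature-add-login", command: "claude" });
});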

1.5 Cloud (sketched here, expanded in 04-cloud-and-data.md)


2. Worktrees: The Isolation Primitive

Workspaces ≡ git worktrees, plus metadata. The mental model:

my-project/                              ← main checkout
├── .git/
├── src/
└── .worktrees/                          ← Superset-managed
    ├── feature-add-login/               ← worktree 1 (branch=feature-add-login)
    ├── fix-flaky-test/                  ← worktree 2
    └── claude-experiment-2026-05-03/    ← worktree 3

Each worktree:

2.1 Three creation intents → one UX

apps/desktop/V2_WORKSPACE_CREATION.md codifies the model. The user can:

| Intent | When | Git operation |
| --- | --- | --- |
| Fork | "New workspace" + prompt for branch name | git worktree add -b <new> <base> |
| Checkout | Pick an existing branch (local or remote) | git worktree add --track -b <branch> origin/<branch> |
| Adopt | Discover an orphan .worktrees/<branch>/ | No git op; just register the cloud row |

All three converge in apps/desktop/src/lib/trpc/routers/workspaces/procedures/create.ts → host-service workspaceCreation.create/checkout/adopt (packages/host-service/src/trpc/router/workspace-creation/).
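
One way to picture the convergence is a single input discriminated on intent; the type and field names below are illustrative, not the actual router contract, and the git arguments mirror the table above:

// illustrative shape of the three creation intents converging on one procedure
type CreateWorkspaceInput =
  | { intent: "fork"; projectId: string; baseBranch: string; newBranch: string }
  | { intent: "checkout"; projectId: string; branch: string }
  | { intent: "adopt"; projectId: string; worktreePath: string; branch: string };

// The git side of each intent, matching the table above (paths and args simplified).
function gitArgsFor(input: CreateWorkspaceInput): string[] | null {
  switch (input.intent) {
    case "fork":
      return ["worktree", "add", "-b", input.newBranch, `.worktrees/${input.newBranch}`, input.baseBranch];
    case "checkout":
      return ["worktree", "add", "--track", "-b", input.branch, `.worktrees/${input.branch}`, `origin/${input.branch}`];
    case "adopt":
      return null; // no git op; just register the cloud row
  }
}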

Authority decision: the cloud's view of "is there a workspace for this branch?" wins over the host's stale local cache, because hosts come and go. The branch picker UI uses a server-side searchBranches procedure (substring match, filtering, and cursor pagination in a single call) to keep that view fresh.
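
A hypothetical shape for that procedure's input and output (names and fields assumed, not the real contract):

// illustrative cursor-paginated branch search; not the actual procedure definition
type SearchBranchesInput = {
  projectId: string;
  query: string;           // substring matched against branch names
  includeRemote?: boolean; // filter
  cursor?: string;         // opaque cursor from the previous page
  limit?: number;
};

type SearchBranchesPage = {
  branches: { name: string; isRemote: boolean; hasWorkspace: boolean }[];
  nextCursor?: string;     // present only when another page exists
};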


3. IPC Patterns

The transport pairs in the system are listed below. Most of them speak tRPC; the PTY-daemon links are raw NDJSON over a Unix socket, and Electric SQL does row-level sync.

| From | To | Transport | Notes |
| --- | --- | --- | --- |
| Renderer | Main | trpc-electron over IPC | Observables only for subscriptions. ~35 router groups under apps/desktop/src/lib/trpc/routers/. |
| Renderer | Host-service | tRPC over HTTP / WebSocket | Direct connection using port + PSK secret obtained from main. |
| Main | Host-service | tRPC over HTTP | For lifecycle ops (start/stop/restart). |
| Main | PTY daemon (v1) | Unix socket + NDJSON | Legacy. v2 routes PTY through host-service. |
| Host-service | Cloud API | tRPC over HTTPS | Host registration, chat, automations. |
| Host-service | PTY daemon (v2) | Unix socket + NDJSON | The supervisor relationship. |
| Cloud (Electric) | Desktop SQLite | Electric SQL row sync | Pushes agent_commands, device_presence, etc. |
| SDK/CLI | Cloud | HTTPS (tRPC) | Direct. |
| SDK/CLI | Device | Cloud → Relay (WS) → Device | For device-routed ops (e.g., create worktree). |

3.1 Manifest-based adoption (the resilience trick)

Each running host-service writes ~/.superset/host/{orgId}/manifest.json:

{ "pid": 12345, "endpoint": "http://127.0.0.1:54321", "authToken": "psk-…", "startedAt": "2026-05-03T…", "organizationId": "org_…" }

On Electron startup, discoverAll() (see the host-service coordinator at apps/desktop/src/main/lib/host-service-coordinator.ts) scans this directory, health-checks each manifest, and adopts healthy ones rather than spawning duplicates. On quit, the user (or app config) chooses whether to leave the host-service and its agents running, or to shut them down with the app.

This decouples agent uptime from app uptime — a meaningful UX win when an agent is mid-run and the user wants to restart Superset.
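
A sketch of what that adoption pass could look like, assuming the manifest shape shown above; the health-check route is a guess:

// illustrative adoption pass over ~/.superset/host/*/manifest.json (not the real coordinator)
import fs from "node:fs/promises";
import path from "node:path";
import os from "node:os";

type Manifest = {
  pid: number;
  endpoint: string;
  authToken: string;
  startedAt: string;
  organizationId: string;
};

async function discoverAll(): Promise<Manifest[]> {
  const root = path.join(os.homedir(), ".superset", "host");
  const adopted: Manifest[] = [];
  const orgDirs = await fs.readdir(root).catch(() => [] as string[]);
  for (const orgDir of orgDirs) {
    try {
      const raw = await fs.readFile(path.join(root, orgDir, "manifest.json"), "utf8");
      const manifest: Manifest = JSON.parse(raw);
      // Health-check the advertised endpoint; the "/health" route is an assumption.
      const res = await fetch(`${manifest.endpoint}/health`, {
        headers: { authorization: `Bearer ${manifest.authToken}` },
      });
      if (res.ok) adopted.push(manifest); // healthy: adopt it instead of spawning a duplicate
      // an unhealthy or stale manifest would be cleaned up and a fresh host-service spawned
    } catch {
      // unreadable manifest or unreachable endpoint: treat as stale
    }
  }
  return adopted;
}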


4. The Agent Wrapper / Hooks Layer

How does Superset know an agent has finished, emitted a diff, or wants attention — without modifying the agent's source code?

PATH rewriting + wrapper scripts. When a workspace terminal is launched, its environment is prepared: PATH is prefixed with ~/.superset/bin so the shims win binary resolution, and SUPERSET_* variables (SUPERSET_PORT, SUPERSET_HOME_DIR, …) point back at the host-service.

~/.superset/bin/ contains shims for each agent (claude, codex, cursor, gemini, etc.) generated by apps/desktop/src/main/lib/agent-setup/desktop-agent-setup.ts. Each shim:

  1. Forwards args to the real agent binary.
  2. Injects the agent's own hooks config so it'll POST events to localhost:$SUPERSET_PORT on tool-use, completion, errors, etc.
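
A sketch of what generating one of those shims could look like; the wrapper body, hook-injector path, and file locations are assumptions, and the real generator is desktop-agent-setup.ts:

// illustrative shim generator; the real one lives in desktop-agent-setup.ts
import fs from "node:fs/promises";
import path from "node:path";
import os from "node:os";

async function writeShim(agent: string, realBinary: string): Promise<void> {
  const binDir = path.join(os.homedir(), ".superset", "bin");
  await fs.mkdir(binDir, { recursive: true });

  // The generated script injects hook configuration, then hands off to the real agent.
  const script = [
    "#!/bin/sh",
    "# Superset shim: hooks POST lifecycle events to localhost:$SUPERSET_PORT",
    `"$SUPERSET_HOME_DIR/hooks/setup-${agent}" 2>/dev/null || true  # hypothetical hook injector`,
    `exec "${realBinary}" "$@"`,
  ].join("\n");

  await fs.writeFile(path.join(binDir, agent), script, { mode: 0o755 });
}

// e.g. writeShim("claude", "/usr/local/bin/claude");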

For Claude Code specifically, the wrapper rewrites ~/.claude/settings.json to add a managed hook block:

[ -n "$SUPERSET_HOME_DIR" ] && [ -x "$SUPERSET_HOME_DIR/hooks/notify" ] && "$SUPERSET_HOME_DIR/hooks/notify" || true

(See apps/desktop/src/main/lib/agent-setup/agent-wrappers-claude-codex-opencode.ts:68-93.)

This is the mechanism by which Superset becomes a peer to the agent rather than a layer above it. The agent runs unmodified; the hooks ride along.


5. Database Strategy — Cloud + Local Mirrors

Two databases, distinct schemas:

Cloud — Postgres on Neon, Drizzle (packages/db/src/schema/)

Local — SQLite, Drizzle (packages/local-db/src/schema/)

The command queue is the only place where cloud → desktop sync is essential: an SDK call from a CI pipeline "create a worktree" lands in agent_commands, Electric syncs it down, the desktop runs it, writes status back, the cloud sees the row update.
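
A sketch of the desktop-side half of that loop; the row shape, statuses, and helper functions are assumed, not the real Drizzle/Electric code:

// illustrative consumer of the Electric-synced agent_commands rows
type AgentCommand = {
  id: string;
  kind: "create_worktree" | string;
  payload: Record<string, unknown>;
  status: "pending" | "running" | "succeeded" | "failed";
};

// Hypothetical local-db helpers; the real code reads the synced SQLite via Drizzle.
declare function nextPendingCommand(): Promise<AgentCommand | undefined>;
declare function updateStatus(id: string, status: AgentCommand["status"]): Promise<void>;
declare function execute(cmd: AgentCommand): Promise<void>;

async function drainCommandQueue(): Promise<void> {
  for (let cmd = await nextPendingCommand(); cmd; cmd = await nextPendingCommand()) {
    await updateStatus(cmd.id, "running");    // status updates flow back to the cloud
    try {
      await execute(cmd);                     // e.g. create the worktree via host-service
      await updateStatus(cmd.id, "succeeded");
    } catch {
      await updateStatus(cmd.id, "failed");
    }
  }
}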

For chat, the path is different — see 03-llm-integration.md (Durable Streams, event-sourced).


6. The V1 / V2 Coexistence

The codebase is mid-transition. Both generations live side by side:

| V1 | V2 |
| --- | --- |
| Polling-based chat (4 fps, two sources) | Event-sourced chat with monotonic seq |
| Terminal-host daemon (separate) | PTY daemon supervised by host-service |
| Single-device assumption | Multi-device (host = machine) |
| device_presence table | v2_hosts, v2_users_hosts |
| projects, workspaces | v2_projects, v2_workspaces |
| Workspace creation tightly coupled to renderer | pendingWorkspaces Electric collection + /pending/{id} page |

Both work today. There is no flag-flip cutover — a deliberate "parallel universes" strategy to avoid mid-flight risk. See 05-design-patterns.md for the strategic narrative.


7. Things To Beware Of (Pitfalls / Notes For Readers)