Multica Repository Analysis: Design Patterns & Decisions
Pattern Catalog
1. Strategy Pattern -- Agent Backends
Location: server/pkg/agent/agent.go:15-126
The Backend interface abstracts away 10 different agent CLI implementations behind a single Execute(ctx, prompt, opts) method. Each provider (Claude, Codex, Copilot, etc.) is adapted to the same streaming session interface:
```go
type Backend interface {
	Execute(ctx context.Context, prompt string, opts ExecOptions) (*Session, error)
}
```

The factory function New(agentType, cfg) at :97-126 selects the concrete implementation. This is textbook Strategy pattern -- the daemon doesn't know or care which agent it's running.
Why it's clever: Each backend handles vastly different CLI protocols (stream-json for Claude, ACP for Hermes/Kimi, JSON for Copilot) but all emit the same Message and Result types. The daemon code is completely agent-agnostic.
2. Singleton + Proxy Pattern -- Core Services
Location: packages/core/api/index.ts, packages/core/auth/index.ts
Core singletons (ApiClient, AuthStore, ChatStore) use a module-level variable + Proxy fallback pattern:
```typescript
// packages/core/api/index.ts
let instance: ApiClient;
export function setApiInstance(api: ApiClient) { instance = api; }
export function getApi(): ApiClient { return instance; }

// Proxy ensures late-binding works during HMR
export const api = new Proxy({} as ApiClient, {
  get: (_, prop) => (getApi() as any)[prop]
});
```

Why: This solves two problems simultaneously:
- Singletons that survive Vite HMR (module-level state persists across hot reloads)
- Late initialization -- stores and hooks can reference api at import time, before CoreProvider creates the actual instance
The same pattern is used for auth (registerAuthStore) and chat stores.
3. Cache-as-Truth + WS Invalidation
Location: packages/core/query-client.ts, packages/core/realtime/use-realtime-sync.ts
The React Query client is configured with staleTime: Infinity -- data is fresh forever until explicitly invalidated. WebSocket events trigger targeted cache invalidation:
Server Event -> WebSocket -> useRealtimeSync -> queryClient.invalidateQueries([key])
This replaces the common patterns of:
- Polling (wasteful)
- Manual cache updates (error-prone, duplicates server logic)
- Global refetch on reconnect (too aggressive)
Tradeoff: More complex event handler setup, but the data is always consistent with the server. The refetchOnReconnect: true setting provides a safety net for missed events during disconnections.
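The wiring can be sketched as a small event-to-key mapping. This is an illustrative TypeScript sketch, not the repository's actual code -- the identifiers (invalidationMap, handleRealtimeEvent, CacheLike) are assumptions; the real code calls TanStack Query's queryClient.invalidateQueries directly.

```typescript
// Minimal sketch of cache-as-truth: queries never go stale on their own;
// freshness is driven entirely by explicit, targeted invalidation.
type QueryKey = readonly string[];

interface CacheLike {
  invalidateQueries(key: QueryKey): void;
}

// Mirrors the React Query defaults described above.
const queryDefaults = {
  staleTime: Infinity,       // data is fresh forever until invalidated
  refetchOnReconnect: true,  // safety net for events missed while offline
};

// Map server event types to the cache keys they should invalidate.
const invalidationMap: Record<string, QueryKey> = {
  "issue:created": ["issues"],
  "issue:updated": ["issues"],
  "comment:created": ["comments"],
};

// Called for each incoming WebSocket event.
function handleRealtimeEvent(cache: CacheLike, eventType: string): void {
  const key = invalidationMap[eventType];
  if (key) cache.invalidateQueries(key);
}
```

Unknown event types fall through harmlessly, which keeps the handler forward-compatible with new server events.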
4. Platform Bridge / Adapter Pattern
Location: packages/core/platform/, apps/*/platform/
The shared packages need routing and storage without depending on Next.js or Electron APIs. The solution is an adapter interface:
```typescript
// packages/core/navigation/types.ts
interface NavigationAdapter {
  push(path: string): void;
  replace(path: string): void;
  back(): void;
  pathname: string;
  searchParams: URLSearchParams;
}
```

Each app provides its own implementation:
- Web: apps/web/platform/navigation.tsx wraps useRouter() from next/navigation
- Desktop: apps/desktop/src/renderer/src/platform/navigation.tsx wraps react-router-dom + tab management + overlay interception
The StorageAdapter interface similarly abstracts localStorage for web and electron-store for desktop.
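A hedged sketch of what such a StorageAdapter could look like -- the interface shape and factory name here are assumptions, not the repository's actual definitions:

```typescript
// Assumed shape of a storage adapter; the real interface may differ.
interface StorageAdapter {
  get(key: string): string | null;
  set(key: string, value: string): void;
  remove(key: string): void;
}

// The web app would back this with window.localStorage and the desktop
// app with electron-store; an in-memory variant shows the contract.
function createMemoryStorage(): StorageAdapter {
  const data = new Map<string, string>();
  return {
    get: (key) => data.get(key) ?? null,
    set: (key, value) => void data.set(key, value),
    remove: (key) => void data.delete(key),
  };
}
```

Because shared packages only ever see the interface, swapping localStorage for electron-store (or a test double) requires no changes outside the app's platform directory.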
5. Observer Pattern -- Event Bus
Location: server/internal/events/bus.go:1-89
The event bus is a classic observer/pub-sub with two levels:
- Type-specific handlers: bus.Subscribe("issue:created", handler)
- Global handlers: bus.SubscribeAll(handler)
Panic recovery per handler (:68-76) is crucial -- a bug in one listener (e.g., notifications) shouldn't prevent others (e.g., realtime broadcast) from executing.
Design choice: Synchronous dispatch. This means handler order matters (subscribers must run before notifications at :93-97), but it also means no message loss and simpler reasoning about consistency.
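The bus's shape can be sketched in TypeScript (the real implementation is Go; a thrown error stands in for a Go panic, and the class and method names are illustrative):

```typescript
// Pub-sub with synchronous dispatch and per-handler error isolation.
type Handler = (eventType: string, payload: unknown) => void;

class EventBus {
  private byType = new Map<string, Handler[]>();
  private global: Handler[] = [];

  subscribe(eventType: string, h: Handler): void {
    const list = this.byType.get(eventType) ?? [];
    list.push(h);
    this.byType.set(eventType, list);
  }

  subscribeAll(h: Handler): void {
    this.global.push(h);
  }

  // Handlers run synchronously in subscription order; a throwing handler
  // is isolated so later handlers still execute.
  publish(eventType: string, payload: unknown): void {
    const handlers = [...(this.byType.get(eventType) ?? []), ...this.global];
    for (const h of handlers) {
      try {
        h(eventType, payload);
      } catch (err) {
        console.error("event handler failed:", err);
      }
    }
  }
}
```

The try/catch per handler is the analogue of the Go code's per-handler panic recovery: one buggy listener cannot starve the rest.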
6. Middleware Chain Pattern
Location: server/cmd/server/router.go:94-111, server/internal/middleware/
Chi middleware composes as nested handlers. The auth middleware (middleware/auth.go:18-98) demonstrates priority-based token resolution:
- Check Authorization header for PAT (mul_* prefix) -> hash lookup in DB
- Check Authorization header for JWT -> HMAC validation
- Check multica_auth HttpOnly cookie -> JWT validation
- If cookie auth: require CSRF token for state-changing methods
Interesting detail: The daemon has its own auth middleware (daemon_auth.go:45-120) that validates mdt_* daemon tokens but falls back to PAT/JWT for backward compatibility.
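The priority order above can be sketched as a resolution chain. This is a hedged TypeScript illustration of the logic described, not the actual Go middleware; the request shape and result type are assumptions:

```typescript
// Illustrative priority-ordered token resolution.
interface RequestLike {
  headers: Record<string, string | undefined>;
  cookies: Record<string, string | undefined>;
}

type AuthResult = { kind: "pat" | "jwt" | "cookie"; token: string } | null;

function resolveAuth(req: RequestLike): AuthResult {
  const header = req.headers["authorization"];
  const bearer = header?.startsWith("Bearer ") ? header.slice(7) : undefined;

  // 1. PAT with the mul_* prefix (would be hash-looked-up in the DB).
  if (bearer?.startsWith("mul_")) return { kind: "pat", token: bearer };

  // 2. JWT in the Authorization header (would be HMAC-validated).
  if (bearer) return { kind: "jwt", token: bearer };

  // 3. HttpOnly cookie; cookie auth additionally requires a CSRF token
  //    on state-changing methods (not shown here).
  const cookie = req.cookies["multica_auth"];
  if (cookie) return { kind: "cookie", token: cookie };

  return null;
}
```

Ordering matters: a client that sends both a header and a cookie is authenticated by the header, so the stricter CSRF requirement only applies when the cookie is the sole credential.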
7. Workspace-Aware Storage Pattern
Location: packages/core/platform/workspace-storage.ts:98-107
Zustand stores that persist to localStorage are namespaced per workspace:
```typescript
function createWorkspaceAwareStorage(): PersistStorage {
  // Keys become: `multica:${currentSlug}:${storeName}`
  // Workspace switch triggers rehydration of all registered stores
}
```

This prevents state leakage between workspaces (e.g., issue filters from workspace A appearing in workspace B) while still persisting user preferences per-workspace.
Rehydration (registerForWorkspaceRehydration): When the user switches workspaces, all registered stores rehydrate from the new workspace's storage namespace.
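A minimal sketch of the namespacing, assuming a simplified stand-in for Zustand's persist-storage interface (the parameter list and backing store are illustrative; the real function reads the current slug from app state):

```typescript
// Simplified stand-in for Zustand's PersistStorage interface.
interface PersistStorage {
  getItem(name: string): string | null;
  setItem(name: string, value: string): void;
}

// Every read and write goes through a workspace-scoped key, so the same
// store name resolves to different values in different workspaces.
function createWorkspaceAwareStorage(
  backing: Map<string, string>,
  getCurrentSlug: () => string,
): PersistStorage {
  const key = (name: string) => `multica:${getCurrentSlug()}:${name}`;
  return {
    getItem: (name) => backing.get(key(name)) ?? null,
    setItem: (name, value) => void backing.set(key(name), value),
  };
}
```

Because the slug is resolved lazily on each access, switching workspaces and rehydrating immediately reads from the new namespace with no copying.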
8. Provider Composition Pattern
Location: packages/core/platform/core-provider.tsx
The CoreProvider initializes all core singletons and wraps children in a specific provider order:
CoreProvider
└── QueryProvider (TanStack React Query)
└── AuthInitializer (hydrates auth state)
└── WSProvider (WebSocket connection, depends on auth)
└── {children}
Initialization guard (:29): if (initialized) return; prevents re-creation on HMR, which would break singleton references.
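The guard amounts to module-level idempotent initialization. A sketch (identifiers are illustrative; the real createSingletons work is the setApiInstance / registerAuthStore calls):

```typescript
// Module-level state survives HMR, so re-running the provider's setup
// must be a no-op rather than recreating singletons.
let initialized = false;
let createCount = 0; // for demonstration only

function createSingletons(): void {
  // stand-in for setApiInstance(...), registerAuthStore(...), etc.
  createCount += 1;
}

function initCore(): void {
  if (initialized) return; // HMR re-entry lands here
  initialized = true;
  createSingletons();
}
```

Recreating the singletons on HMR would break the Proxy-based late binding from pattern 2, since existing references would point at stale instances.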
9. Command Pattern -- CLI Commands
Location: server/cmd/multica/, server/internal/cli/
Each CLI command is a Cobra command with its own handler function. The daemon uses the same CLI binary for both interactive use and automated agent operations (multica issue get, multica issue comment add, etc.).
Dual-use design: The same CLI binary that humans use to manage their workspace is also the tool agents use to interact with the platform. This means agents have the same capabilities as humans -- no separate "agent API".
10. Optimistic Mutation Pattern
Location: packages/core/issues/mutations.ts, throughout packages/core/*/mutations.ts
From CLAUDE.md: "Mutations are optimistic by default. Apply the change locally, send the request, roll back on failure, invalidate on settle."
The pattern:
- onMutate: Update React Query cache optimistically
- Send API request
- onError: Roll back to snapshot
- onSettled: Invalidate cache (WS event may have already done this)
This gives instant UI feedback while maintaining consistency.
Notable Design Tradeoffs
1. Synchronous Event Bus vs. Message Queue
Choice: In-process synchronous dispatch instead of async message queue.
Upside: No message loss, simple ordering guarantees, no infrastructure dependency. Downside: A slow listener blocks all subsequent listeners. Mitigated by keeping listeners fast (they broadcast to WS or write to DB, not compute-heavy).
2. Agent CLIs as Subprocesses vs. API Calls
Choice: Spawn agent CLIs as child processes instead of calling LLM APIs directly.
Upside: Leverage each agent's full tool ecosystem (Claude Code's tool use, Codex's sandbox, etc.), support 10 providers without 10 API integrations, agents run with user's own credentials. Downside: Requires agent CLIs to be installed on the daemon host, output parsing varies per provider, harder to test.
3. Internal Packages (Raw TS) vs. Pre-compiled Packages
Choice: Shared packages export raw .ts/.tsx files, no build step.
Upside: Zero-config HMR, instant go-to-definition, no stale build artifacts. Downside: Every consuming app must be able to compile TypeScript (not a problem with Next.js and Vite), slower initial builds.
4. URL-Driven Workspace vs. Store-Driven Workspace
Choice: The workspace slug comes from the URL ([workspaceSlug] param), not from a persistent store.
Upside: Deep-linkable, browser back/forward works, no state sync issues. Downside: Extra complexity in desktop app (tabs have their own routers, cross-workspace navigation must be intercepted).
5. sqlc vs. ORM
Choice: Hand-written SQL + code generation instead of GORM/Ent.
Upside: Full PostgreSQL feature access, predictable queries, type-safe without runtime reflection.
Downside: More SQL to write, schema changes require migration + query updates + make sqlc.
What's Unusual or Clever
Agent Loop Prevention (execenv/runtime_config.go:161-162, 201-213)
The system has explicit guardrails against infinite agent-to-agent loops. When an agent replies to a comment from another agent:
- The prompt explicitly warns: "If that comment was an acknowledgment, thanks, or sign-off... do NOT reply -- silence is the preferred way to end agent-to-agent threads" (daemon/prompt.go:52)
- The meta skill content has an entire "When NOT to use a mention link" section explaining that @mention triggers re-run
- Rule: "If you are unsure whether a mention is warranted, don't mention. Silence ends conversations; @ restarts them."
Blocked Args Security (server/pkg/agent/claude.go:386-533)
The daemon prevents user-configured custom_args from overriding protocol-critical flags:
```go
var claudeBlockedArgs = map[string]blockedArgMode{
	"-p":                blockedStandalone, // non-interactive mode
	"--output-format":   blockedWithValue,  // stream-json protocol
	"--input-format":    blockedWithValue,  // stream-json protocol
	"--permission-mode": blockedWithValue,  // bypassPermissions
	"--mcp-config":      blockedWithValue,  // set by daemon
}
```

This is a defense-in-depth measure -- workspace members can configure agent args but can't break the daemon's communication protocol.
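A TypeScript sketch of how such a blocklist could be applied, showing why the two modes matter -- a standalone flag is dropped alone, while a with-value flag also consumes its argument. This is an assumption-laden illustration, not the Go implementation (which may also handle the --flag=value form, omitted here):

```typescript
type BlockedArgMode = "standalone" | "withValue";

// Mirrors the Go map above.
const blockedArgs: Record<string, BlockedArgMode> = {
  "-p": "standalone",
  "--output-format": "withValue",
  "--input-format": "withValue",
  "--permission-mode": "withValue",
  "--mcp-config": "withValue",
};

function filterCustomArgs(args: string[]): string[] {
  const out: string[] = [];
  for (let i = 0; i < args.length; i++) {
    const mode = blockedArgs[args[i]];
    if (mode === "withValue") { i++; continue; } // drop flag and its value
    if (mode === "standalone") continue;          // drop the flag alone
    out.push(args[i]);
  }
  return out;
}
```

Dropping the value along with the flag is the subtle part: removing only "--output-format" would leave a dangling "json" that the CLI would misparse as a positional argument.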
Filtered Child Environment (server/pkg/agent/claude.go:483-487)
When spawning agent CLIs, the daemon strips Claude Code-specific env vars:
```go
func isFilteredChildEnvKey(key string) bool {
	return key == "CLAUDECODE" ||
		strings.HasPrefix(key, "CLAUDECODE_") ||
		strings.HasPrefix(key, "CLAUDE_CODE_")
}
```

This prevents nested Claude Code sessions from inheriting the outer session's configuration, which would cause confusing behavior.
Session Resume Logic (server/pkg/agent/claude.go:457-462)
The resolveSessionID function handles a subtle edge case: when --resume is requested but Claude emits a different session ID AND fails, the resume didn't actually land. Returning empty string lets the daemon's retry logic start fresh instead of persisting a phantom session.
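The decision can be sketched as follows -- a hedged TypeScript rendering of the rule described above, not the actual Go function, and the fallback behavior for cases the text doesn't cover is an assumption:

```typescript
// If resume was requested but the CLI both emitted a different session ID
// and failed, the resume didn't land: return "" so retry logic starts
// fresh instead of persisting a phantom session.
function resolveSessionID(
  requestedResumeID: string | undefined,
  emittedID: string | undefined,
  failed: boolean,
): string {
  if (requestedResumeID && failed && emittedID !== requestedResumeID) {
    return "";
  }
  return emittedID ?? requestedResumeID ?? "";
}
```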
Dynamic Model Discovery (server/pkg/agent/models.go)
For providers with evolving model catalogs (Cursor, Hermes, Kimi, OpenCode, Pi, OpenClaw), models are discovered at runtime by actually launching the CLI and parsing its output. Results are cached for 60 seconds. Static catalogs are used for stable providers (Claude, Codex, Gemini, Copilot).
The ACP (Agent Communication Protocol) discovery flow (models.go:386-502) is particularly clever -- it spins up a throwaway hermes acp or kimi acp process, drives just enough of the protocol to receive the model list from session/new, and tears it down.