# 03 - Key Abstractions, Patterns & Design Decisions

## NestJS Module Pattern
Each feature follows the standard NestJS dependency injection pattern:
```
FeatureModule
├── feature.controller.ts  → HTTP route handlers, decorators, DTOs
├── feature.service.ts     → Business logic, database operations
├── feature.module.ts      → Module declaration, imports, exports
└── dto/                   → Request/response validation schemas
```
Modules are registered in `apps/api/src/app.module.ts` and can import/export services from other modules. The `ConfigModule` is global, making configuration available everywhere without explicit imports.
Guards are applied at the module or controller level:
- `HybridAuthGuard` — default for most endpoints
- `ThrottlerGuard` — global rate limiting (100 req/60s)
- `RequireRoles` — per-endpoint role authorization
## Server Actions (Next.js)

The frontend uses Next.js server actions (`'use server'` directive) for mutations, colocated with their related UI or in `apps/app/src/actions/`. Server actions:
- Run on the server with full access to Prisma and environment secrets
- Handle auth via `auth.api.getSession()` with cookie-based sessions
- Revalidate paths/tags after mutations
- Avoid the need for separate API calls for frontend-originated writes
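The flow above can be sketched with the framework pieces injected as parameters, so the shape is visible in isolation. `completeTask`, its dependency names, and the `/tasks` path are hypothetical stand-ins for Better Auth, Prisma, and `revalidatePath`:

```ts
// Hypothetical sketch of a server action's control flow. The real code
// would carry the 'use server' directive and call the framework APIs
// directly; here they are injected so the logic stands alone.
type Session = { userId: string; organizationId: string } | null;

interface ActionDeps {
  getSession: () => Promise<Session>; // auth.api.getSession() in the app
  updateTask: (id: string, data: { status: string }) => Promise<void>; // Prisma write
  revalidate: (path: string) => void; // revalidatePath() in the app
}

export async function completeTask(taskId: string, deps: ActionDeps) {
  const session = await deps.getSession(); // cookie-based session check
  if (!session) throw new Error("Unauthorized");

  await deps.updateTask(taskId, { status: "done" }); // server-side mutation
  deps.revalidate("/tasks"); // refresh cached pages after the write
  return { success: true };
}
```

Because the action runs on the server, the Prisma write and secrets never reach the client; the browser only invokes the action over Next.js's RPC transport.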
## Custom Hooks

The frontend defines domain-specific hooks in `apps/app/src/hooks/`:
| Hook | Purpose |
|---|---|
| `use-api-swr` | SWR-based data fetching wrapper for the NestJS API |
| `use-api` | Base API client with organization context headers |
| `use-tasks-api` | Task CRUD operations |
| `use-risks` | Risk register data and mutations |
| `use-vendors` | Vendor management operations |
| `use-findings-api` | Security findings queries |
| `use-people-api` | Personnel management |
| `use-comments-api` | Comment thread operations |
| `use-organization-members` | Org member list and management |
| `use-integration-platform` | Integration connection status |
| `use-access-requests` | Access request management |
| `use-task-items` | Task sub-item (checklist) operations |
| `use-api-keys` | API key management |
| `use-data-table` | Generic table state (sorting, filtering, pagination) |
| `use-domain` | Domain/URL utilities |
| `use-mobile` | Responsive breakpoint detection |
## Integration Registry

File: `packages/integration-platform/src/registry/index.ts`
The integration platform uses a singleton registry pattern with runtime manifest validation:
```ts
class IntegrationRegistryImpl implements IntegrationRegistry {
  private manifests: Map<string, IntegrationManifest> = new Map();

  constructor(manifests: IntegrationManifest[]) {
    for (const manifest of manifests) {
      this.validateManifest(manifest); // Throws on invalid
      this.manifests.set(manifest.id, manifest);
    }
  }
}

export const registry = new IntegrationRegistryImpl(allManifests);
```

Validation rules:
- Must have a non-empty `id` and `name`
- Must declare an `auth` strategy
- OAuth2 integrations must provide `authorizeUrl` and `tokenUrl`
- Must have at least one `capability`
- Duplicate IDs throw at startup
This pattern ensures all integrations are validated at application startup and provides a consistent interface for querying integration capabilities and configuration.
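A minimal sketch of what the validation might enforce, directly following the rules listed above. The `IntegrationManifest` shape here is an assumption for illustration; the real type and field names may differ:

```ts
// Simplified, assumed manifest shape (not the real type).
interface IntegrationManifest {
  id: string;
  name: string;
  auth: {
    strategy: "oauth2" | "apiKey" | "none";
    authorizeUrl?: string;
    tokenUrl?: string;
  };
  capabilities: string[];
}

// Sketch of validateManifest: enforce the five rules, failing fast at startup.
function validateManifest(m: IntegrationManifest, seen: Set<string>): void {
  if (!m.id || !m.name) throw new Error("Manifest missing id or name");
  if (!m.auth?.strategy) throw new Error(`${m.id}: missing auth strategy`);
  if (m.auth.strategy === "oauth2" && (!m.auth.authorizeUrl || !m.auth.tokenUrl)) {
    throw new Error(`${m.id}: OAuth2 requires authorizeUrl and tokenUrl`);
  }
  if (m.capabilities.length === 0) {
    throw new Error(`${m.id}: must declare at least one capability`);
  }
  if (seen.has(m.id)) throw new Error(`Duplicate integration id: ${m.id}`);
  seen.add(m.id);
}
```

Throwing in the constructor means a bad manifest crashes the process at boot rather than surfacing as a runtime failure on a user request.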
## HybridAuthGuard

File: `apps/api/src/auth/hybrid-auth.guard.ts`
The guard inspects request headers to determine the authentication strategy:
- **API Key path**: Looks for an `X-API-Key` header. Hashes the key with SHA-256 (plus an optional salt), looks it up in the database, checks expiration, and attaches organization context to the request.
- **JWT path**: Validates `Authorization: Bearer <token>` against the Better Auth JWKS endpoint. Requires an explicit `X-Organization-Id` header and verifies the user's membership in that organization via a database lookup.
JWKS resilience:
- Keys cached with 60-second max age and 10-second cooldown
- On key mismatch (e.g., after key rotation), retries with a fresh JWKS fetch
- Graceful error handling for connection failures
Design rationale: A single guard handles both auth strategies because every API endpoint needs organization context. The guard normalizes the auth result into a common `AuthContext` shape regardless of strategy, so controllers don't need to know how the user authenticated.
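That normalization can be sketched with an illustrative `AuthContext` shape (the real fields are not shown in this document, so these are assumptions):

```ts
// Illustrative common shape both strategies normalize into.
type AuthContext = {
  organizationId: string;
  userId: string | null; // null when the caller authenticated with an API key
  via: "api-key" | "jwt";
};

// API key path: org context comes from the key record itself.
function fromApiKey(organizationId: string): AuthContext {
  return { organizationId, userId: null, via: "api-key" };
}

// JWT path: org context comes from the X-Organization-Id header,
// after the membership check passes.
function fromJwt(organizationId: string, userId: string): AuthContext {
  return { organizationId, userId, via: "jwt" };
}
```

Controllers then read `request.authContext` (or similar) and never branch on the strategy.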
## PostgreSQL Advisory Locks

Used in vendor risk assessment to prevent concurrent writes to the same vendor record:
```ts
await prisma.$executeRawUnsafe(`SELECT pg_advisory_lock($1)`, lockKey);
try {
  // Write vendor assessment
} finally {
  await prisma.$executeRawUnsafe(`SELECT pg_advisory_unlock($1)`, lockKey);
}
```

This avoids race conditions when multiple Trigger.dev tasks process the same vendor simultaneously (e.g., during batch onboarding).
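`pg_advisory_lock` takes a 64-bit integer, so a string vendor ID has to be reduced to one. One plausible derivation (an assumption for illustration; the project may compute `lockKey` differently) hashes the ID and takes the first 8 bytes:

```ts
import { createHash } from "node:crypto";

// Hypothetical lockKey derivation: SHA-256 the vendor ID and read the
// first 8 bytes as a signed 64-bit integer, matching Postgres bigint.
// Collisions are possible but astronomically unlikely for this use.
function vendorLockKey(vendorId: string): bigint {
  const digest = createHash("sha256").update(vendorId).digest();
  return digest.readBigInt64BE(0);
}
```

The same vendor ID always maps to the same key, so two concurrent tasks for one vendor serialize, while tasks for different vendors proceed in parallel.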
## T3 Env (Type-Safe Environment Variables)

File: `apps/app/src/env.mjs`

Uses `@t3-oss/env-nextjs` with Zod schemas to validate environment variables at build time:
```js
export const env = createEnv({
  server: {
    DATABASE_URL: z.string().min(1),
    AUTH_SECRET: z.string(),
    OPENAI_API_KEY: z.string().optional(),
    // ... 40+ server variables
  },
  client: {
    NEXT_PUBLIC_POSTHOG_KEY: z.string().optional(),
    NEXT_PUBLIC_API_URL: z.string().optional(),
    // ... 10 client variables
  },
  skipValidation: !!process.env.CI || !!process.env.SKIP_ENV_VALIDATION,
});
```

- Server variables are only accessible in server components/actions
- Client variables (prefixed `NEXT_PUBLIC_`) are bundled into the client
- Validation is skipped in CI and Docker builds via `SKIP_ENV_VALIDATION`
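The validate-or-skip behavior can be illustrated with plain checks. This is a deliberately simplified stand-in: `createEnv` additionally enforces the client/server split and proxies property access, which is not reproduced here.

```ts
// Simplified illustration of "validate unless skipValidation is set".
// Plain presence checks stand in for the Zod schemas.
function checkEnv(
  raw: Record<string, string | undefined>,
  required: string[],
  skip: boolean,
): Record<string, string | undefined> {
  if (!skip) {
    const missing = required.filter((key) => !raw[key]);
    if (missing.length > 0) {
      throw new Error(`Missing required env vars: ${missing.join(", ")}`);
    }
  }
  return raw;
}
```

Skipping in CI and Docker builds lets images be built without production secrets present; validation then runs when the app actually boots with its real environment.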
## Prisma Schema Composition

File: `packages/db/scripts/combine-schemas.js`

Instead of a single monolithic `schema.prisma`, the project splits models across 30 domain-specific `.prisma` files in `packages/db/prisma/schema/`:
```
packages/db/prisma/
├── schema.prisma            ← Base config (datasource, generator)
└── schema/
    ├── auth.prisma          ← User, Session, Account
    ├── organization.prisma  ← Organization, Member
    ├── task.prisma          ← Task model
    ├── policy.prisma        ← Policy model
    └── ... (26 more)
```
The `combine-schemas.js` script concatenates them into `dist/schema.prisma` with separator comments. This runs during the build pipeline (configured in `turbo.json` as the `db:generate` task) and produces the combined schema that Prisma uses for client generation and migrations.
Trade-off: This approach improves developer experience (smaller files, domain grouping) at the cost of a build step. Prisma natively supports multi-file schemas only in newer versions, so the custom script provides backwards compatibility and control over the output format.
## AI Chat Tool System

Files: `apps/app/src/data/tools/`
The AI chat assistant uses Vercel AI SDK's tool-calling system with four tool modules:
| Tool File | Purpose |
|---|---|
| `organization.ts` | Fetch org details, members, settings |
| `policies.ts` | Query policy documents and status |
| `risks-tool.ts` | Retrieve risk register entries |
| `user.ts` | Get current user info and permissions |
These tools are passed to `streamText()` in the chat route handler, allowing the LLM to fetch live organization data during conversations rather than relying on stale context.
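The shape of such a tool can be sketched without the SDK. The real modules use the AI SDK's `tool()` helper with Zod parameter schemas; `getRisks` and its return shape here are hypothetical stubs in place of the Prisma-backed implementation:

```ts
// Illustrative tool shape: a description the LLM reads when deciding
// whether to call the tool, plus an execute function that returns live data.
interface ChatTool<Args, Result> {
  description: string;
  execute: (args: Args) => Promise<Result>;
}

// Hypothetical stand-in for the risks-tool.ts module; in the app this
// would be an org-scoped Prisma query.
const getRisks: ChatTool<{ organizationId: string }, { id: string; title: string }[]> = {
  description: "Retrieve risk register entries for the current organization",
  execute: async ({ organizationId }) => {
    return [{ id: "risk_1", title: `Example risk for ${organizationId}` }];
  },
};
```

During a chat turn, the model emits a tool call with arguments matching the schema, the handler runs `execute`, and the result is streamed back into the conversation as context for the model's next message.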