# L0 — Foundation (Competing Plan — Claude Code)

**Duration:** Days 6–9 (4 days)

**Parallelism:** None — sequential foundation; everything depends on this

**References:** `07-DATABASE-SCHEMA.md`, `10-AUTH-AND-PERMISSIONS.md`, `11-REALTIME-AND-BACKGROUND-JOBS.md`, `14-TECHNICAL-DECISIONS.md`, `15-DESIGN-TOKENS.md`, `SECURITY-GUIDELINES.md`

---

## 1. Baseline Critique

### What the baseline gets right

- **Step ordering is sound.** Scaffold → Docker → DB → Auth → Infra → Layout → Security is the correct dependency chain.
- **Drizzle schema grouping** (1 file per domain) is practical and matches the project scale.
- **Docker multi-stage build** with standalone output is correct for production.
- **Nginx security headers and rate limiting zones** are thorough and match `SECURITY-GUIDELINES.md`.
- **Acceptance criteria** are concrete and testable.

### What's missing or wrong

1. **Route structure is wrong.** The baseline uses `(crm)/` as the route group. The locked file organization in `PROMPT-CLAUDE-CODE.md` specifies `(dashboard)/[portSlug]/` with a dynamic port slug segment. This is a fundamental routing decision — every page URL, every link, every breadcrumb depends on it. Getting this wrong means rework across every layer.

2. **Session storage contradicts `14-TECHNICAL-DECISIONS.md`.** The baseline stores sessions in PostgreSQL. The locked tech decisions say `better-auth/plugins/redis` for session persistence. Redis sessions are faster and reduce DB load for the most frequent operation in the system (session validation on every request).

3. **BullMQ queue names don't match the spec.** The baseline invents names like `notification-processing`, `email-sync`, `email-send`. The spec in `11-REALTIME-AND-BACKGROUND-JOBS.md` defines exactly 10 queues: `email`, `documents`, `notifications`, `import`, `export`, `reports`, `webhooks`, `maintenance`, `ai`, `bulk`. Use the spec names.
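
The ten spec names are worth pinning down as a single typed constant so invented names cannot creep back in. A sketch of what `lib/queue/index.ts` could export (the `assertQueueName` helper is illustrative, not from the spec):

```typescript
// The 10 queue names from 11-REALTIME-AND-BACKGROUND-JOBS.md, as one
// source of truth for queue and worker construction.
export const QUEUE_NAMES = [
  'email', 'documents', 'notifications', 'import', 'export',
  'reports', 'webhooks', 'maintenance', 'ai', 'bulk',
] as const;

export type QueueName = (typeof QUEUE_NAMES)[number];

// Reject invented names ('email-sync', …) at the one place queues are created.
export function assertQueueName(name: string): QueueName {
  if (!(QUEUE_NAMES as readonly string[]).includes(name)) {
    throw new Error(`Unknown queue "${name}" — must be one of: ${QUEUE_NAMES.join(', ')}`);
  }
  return name as QueueName;
}
```

Passing `QueueName` (not `string`) through the `new Queue(...)` / `new Worker(...)` call sites then makes a drifted name a compile error rather than a silent dead queue.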

4. **Role count mismatch.** The baseline seeds 4 roles (`super_admin`, `director`, `sales`, `readonly`). The spec in `10-AUTH-AND-PERMISSIONS.md` Section 2.4 defines 5 system roles: `super_admin`, `director`, `sales_manager`, `sales_agent`, `viewer`. All 5 must be seeded.

5. **Missing environment variables.** The `.env.example` omits `CSRF_SECRET`, `EMAIL_CREDENTIAL_KEY`, `DOCUMENSO_WEBHOOK_SECRET`, `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET`, `PUBLIC_SITE_URL` — all required by `SECURITY-GUIDELINES.md`.

6. **Socket.io + Next.js integration is hand-waved.** "Create Socket.io server alongside Next.js custom server" is not a plan. Next.js standalone mode runs `node server.js` with no extension point for WebSocket servers. This needs a custom `server.ts` entry point that boots both Next.js and Socket.io, or a separate worker process. The approach must be specified.

7. **No Drizzle relations defined.** Schemas without `relations()` declarations mean no relational query builder (`db.query.*.findFirst({ with: ... })`). The auth middleware pseudocode in the baseline itself uses `with: { role: true }` — this requires relation definitions.

8. **No application-level rate limiting.** The baseline only has nginx rate limiting. `SECURITY-GUIDELINES.md` Section 6.1 requires Redis sliding window rate limiting at the application layer with `X-RateLimit-*` response headers.
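
The sliding-window algorithm itself is small. A sketch with an in-memory `Map` standing in for Redis (the production `lib/rate-limit.ts` would keep one Redis sorted set per key — `ZADD` the timestamp, `ZREMRANGEBYSCORE` to drop entries older than the window, `ZCARD` to count — so limits hold across app instances; the names here are illustrative):

```typescript
type LimitResult = { allowed: boolean; remaining: number; resetAt: number };

// In-memory stand-in for the Redis sorted set, keyed e.g. `${userId}:${route}`.
const hits = new Map<string, number[]>();

export function rateLimit(
  key: string,
  limit: number,      // max requests per window
  windowMs: number,   // window length in ms
  now: number = Date.now(),
): LimitResult {
  const windowStart = now - windowMs;
  // Drop timestamps that slid out of the window.
  const recent = (hits.get(key) ?? []).filter((t) => t > windowStart);
  const allowed = recent.length < limit;
  if (allowed) recent.push(now);
  hits.set(key, recent);
  return {
    allowed,
    remaining: Math.max(0, limit - recent.length),
    // Feeds the X-RateLimit-Reset header: when the oldest hit expires.
    resetAt: recent.length > 0 ? recent[0]! + windowMs : now + windowMs,
  };
}
```

The route helper then maps `remaining`/`resetAt` onto `X-RateLimit-Remaining` and `X-RateLimit-Reset`, and returns 429 when `allowed` is false.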

9. **No `pgcrypto` extension.** Security guidelines require AES-256-GCM encryption for email credentials and pgcrypto for Google Calendar tokens. The database init must enable this extension.

10. **No env validation.** No Zod schema validating that all required environment variables are present and well-formed at startup. The app should fail fast with a clear error, not crash mid-request when `process.env.DATABASE_URL` is undefined.

11. **No custom server entry point.** BullMQ workers need a long-running process. Next.js standalone `server.js` doesn't run background workers. Need either a custom server wrapper or a separate worker process in Docker Compose.

12. **Missing `lint-staged`.** Pre-commit hook runs ESLint on the entire project. With `lint-staged` + `husky`, only staged files are linted — dramatically faster as the project grows.

13. **No health check endpoints.** Docker Compose health checks need an HTTP endpoint. The baseline's postgres and redis have health checks, but `crm-app` has none.
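
The endpoint is mostly dependency probes plus an aggregate status. A sketch of the aggregation behind `GET /api/health`, with the check functions injected so the route can pass real `SELECT 1` / Redis `PING` probes while tests pass stubs (function and type names are illustrative):

```typescript
type CheckFn = () => Promise<void>;

export type Health = {
  status: 'ok' | 'degraded';
  checks: Record<string, 'ok' | 'error'>;
};

// Runs all probes concurrently; a throwing probe marks that check 'error'.
export async function runHealthChecks(checks: Record<string, CheckFn>): Promise<Health> {
  const results: Record<string, 'ok' | 'error'> = {};
  await Promise.all(
    Object.entries(checks).map(async ([name, fn]) => {
      try {
        await fn();
        results[name] = 'ok';
      } catch {
        results[name] = 'error';
      }
    }),
  );
  const healthy = Object.values(results).every((r) => r === 'ok');
  return { status: healthy ? 'ok' : 'degraded', checks: results };
}
```

The route handler returns 200 when `status` is `'ok'` and 503 otherwise, which is what the Docker Compose `wget --spider` probe keys on.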

14. **`next.config.ts` incomplete.** Needs `serverExternalPackages` for `pino`, `bullmq`, `ioredis`, `minio`, `postgres` — these packages use native Node.js APIs that Next.js's bundler will break if it tries to bundle them.

### What I'd change structurally

- **Split the custom server concern early.** Define a `server.ts` that boots Next.js + Socket.io + BullMQ workers together in development, and use separate Docker Compose services for `crm-app` (Next.js) and `crm-worker` (BullMQ) in production.
- **Define Zod env schema as the very first file.** Every other module imports validated config from it.
- **Port slug in URLs from Day 1.** `/(dashboard)/[portSlug]/clients` not `/(crm)/clients`. This avoids a painful routing migration later.
- **Add a `lib/api/helpers.ts` pattern** for API route handlers — a composable middleware chain (`withAuth → withPort → withPermission → handler`) rather than ad-hoc checks in each route.
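
Stripped of Next.js types, the shape of that chain is the following (a sketch: the real helpers would wrap `NextRequest`/`NextResponse` and read the session and port membership from Better Auth and the DB; only the wrapper names come from the bullet above, the rest is illustrative):

```typescript
type Ctx = { userId?: string; portId?: string };
type Res = { status: number; body: unknown };
type Handler = (ctx: Ctx) => Promise<Res>;

// Each wrapper short-circuits with an error response or delegates inward.
const withAuth = (next: Handler): Handler => async (ctx) => {
  if (!ctx.userId) return { status: 401, body: { error: 'unauthenticated' } };
  return next(ctx);
};

const withPort = (next: Handler): Handler => async (ctx) => {
  if (!ctx.portId) return { status: 403, body: { error: 'no port access' } };
  return next(ctx);
};

// withPermission would check the role's RolePermissions flag the same way.
// Checks run outside-in; the innermost handler only runs once all pass.
export const listClients: Handler = withAuth(
  withPort(async (ctx) => ({ status: 200, body: { portId: ctx.portId } })),
);
```

Every route then declares its full requirement stack in one expression instead of repeating ad-hoc checks.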

---

## 2. Implementation Plan

### Day 1 — Scaffold + Docker + Database Schema

#### Morning: Project Init (2 hours)

**Step 1: Create Next.js project**

```bash
pnpm create next-app@latest port-nimara-crm \
  --typescript --tailwind --eslint --app \
  --src-dir --import-alias "@/*" \
  --use-pnpm
```

**Step 2: Environment validation**

**File:** `src/lib/env.ts`

```typescript
import { z } from 'zod';

const envSchema = z.object({
  // Database
  DATABASE_URL: z.string().url().startsWith('postgresql://'),

  // Redis
  REDIS_URL: z.string().url().startsWith('redis://'),

  // Auth
  BETTER_AUTH_SECRET: z.string().min(32),
  BETTER_AUTH_URL: z.string().url(),
  CSRF_SECRET: z.string().min(32),

  // MinIO
  MINIO_ENDPOINT: z.string().min(1),
  MINIO_PORT: z.coerce.number().int().positive(),
  MINIO_ACCESS_KEY: z.string().min(1),
  MINIO_SECRET_KEY: z.string().min(1),
  MINIO_BUCKET: z.string().min(1),
  MINIO_USE_SSL: z.enum(['true', 'false']).transform((v) => v === 'true'),

  // Documenso
  DOCUMENSO_API_URL: z.string().url(),
  DOCUMENSO_API_KEY: z.string().min(1),
  DOCUMENSO_WEBHOOK_SECRET: z.string().min(16),

  // Email
  SMTP_HOST: z.string().min(1),
  SMTP_PORT: z.coerce.number().int().positive(),

  // Encryption
  EMAIL_CREDENTIAL_KEY: z
    .string()
    .length(64)
    .regex(/^[0-9a-f]+$/i, 'Must be 64-char hex'),

  // Google OAuth
  GOOGLE_CLIENT_ID: z.string().optional(),
  GOOGLE_CLIENT_SECRET: z.string().optional(),

  // OpenAI
  OPENAI_API_KEY: z.string().optional(),

  // App
  APP_URL: z.string().url(),
  PUBLIC_SITE_URL: z.string().url(),
  NODE_ENV: z.enum(['development', 'production', 'test']).default('development'),
  LOG_LEVEL: z.enum(['fatal', 'error', 'warn', 'info', 'debug', 'trace']).default('info'),
});

export type Env = z.infer<typeof envSchema>;

function validateEnv(): Env {
  const result = envSchema.safeParse(process.env);
  if (!result.success) {
    console.error('❌ Invalid environment variables:');
    for (const issue of result.error.issues) {
      console.error(`  ${issue.path.join('.')}: ${issue.message}`);
    }
    process.exit(1);
  }
  return result.data;
}

export const env = validateEnv();
```

**Step 3: TypeScript strict config**

**File:** `tsconfig.json`

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true,
    "forceConsistentCasingInFileNames": true,
    "target": "ES2022",
    "lib": ["dom", "dom.iterable", "ES2022"],
    "module": "esnext",
    "moduleResolution": "bundler",
    "jsx": "preserve",
    "incremental": true,
    "plugins": [{ "name": "next" }],
    "paths": { "@/*": ["./src/*"] },
    "skipLibCheck": true
  },
  "include": ["next-env.d.ts", "**/*.ts", "**/*.tsx", ".next/types/**/*.ts"],
  "exclude": ["node_modules"]
}
```

**Step 4: ESLint + Prettier + lint-staged**

```bash
pnpm add -D prettier eslint-config-prettier lint-staged
```

**File:** `.eslintrc.json`

```json
{
  "extends": ["next/core-web-vitals", "next/typescript", "prettier"],
  "rules": {
    "@typescript-eslint/no-explicit-any": "error",
    "@typescript-eslint/no-unused-vars": ["error", { "argsIgnorePattern": "^_" }],
    "import/order": [
      "warn",
      {
        "groups": ["builtin", "external", "internal", "parent", "sibling"],
        "newlines-between": "always",
        "alphabetize": { "order": "asc" }
      }
    ]
  }
}
```

**File:** `.prettierrc`

```json
{
  "semi": true,
  "singleQuote": true,
  "trailingComma": "all",
  "tabWidth": 2,
  "printWidth": 100
}
```

**File:** `.lintstagedrc.json`

```json
{
  "*.{ts,tsx}": ["eslint --fix", "prettier --write"],
  "*.{json,md,css}": ["prettier --write"]
}
```

**Step 5: Git setup**

**File:** `.gitignore`

```
node_modules/
.next/
.env
.env.local
.env.production
*.pem
*.key
drizzle/*.sql
coverage/
.turbo/
```

**File:** `.env.example` — all variables from `src/lib/env.ts` schema with placeholder values (no real secrets).

```bash
pnpm add -D husky
pnpm exec husky init
```

**File:** `.husky/pre-commit`

```bash
#!/bin/sh
set -e  # without this, a lint-staged failure would not block the commit
pnpm exec lint-staged

# Verify no .env files staged
if git diff --cached --name-only | grep -qE '\.env($|\.)'; then
  echo "❌ .env files must not be committed"
  exit 1
fi

# Scan for potential secrets
if git diff --cached -U0 | grep -qiE '(password|secret|api_key|access_key)\s*[:=]\s*["\x27][A-Za-z0-9+/=]{16,}'; then
  echo "⚠️ Possible hardcoded secret detected. Review staged changes."
  exit 1
fi
```

**Step 6: Directory structure**

Create all directories per the locked file organization in `PROMPT-CLAUDE-CODE.md`. Key difference from baseline: `(dashboard)/[portSlug]/` not `(crm)/`.

```
src/
├── app/
│   ├── (auth)/
│   │   ├── login/
│   │   ├── set-password/
│   │   └── reset-password/
│   ├── (dashboard)/
│   │   ├── [portSlug]/
│   │   │   ├── clients/
│   │   │   ├── interests/
│   │   │   ├── berths/
│   │   │   ├── expenses/
│   │   │   ├── invoices/
│   │   │   ├── email/
│   │   │   ├── reminders/
│   │   │   ├── documents/
│   │   │   ├── reports/
│   │   │   ├── settings/
│   │   │   └── admin/
│   │   │       ├── users/
│   │   │       ├── roles/
│   │   │       ├── ports/
│   │   │       ├── settings/
│   │   │       ├── audit/
│   │   │       ├── webhooks/
│   │   │       ├── reports/
│   │   │       ├── templates/
│   │   │       ├── forms/
│   │   │       ├── tags/
│   │   │       ├── import/
│   │   │       ├── monitoring/
│   │   │       ├── backup/
│   │   │       ├── custom-fields/
│   │   │       └── onboarding/
│   │   └── layout.tsx
│   ├── api/
│   │   ├── auth/[...all]/
│   │   ├── health/
│   │   ├── v1/
│   │   │   ├── clients/
│   │   │   ├── interests/
│   │   │   ├── berths/
│   │   │   ├── expenses/
│   │   │   ├── invoices/
│   │   │   ├── files/
│   │   │   ├── email/
│   │   │   ├── reminders/
│   │   │   ├── documents/
│   │   │   ├── reports/
│   │   │   ├── notifications/
│   │   │   ├── calendar/
│   │   │   ├── admin/
│   │   │   └── search/
│   │   ├── public/
│   │   └── webhooks/
│   ├── layout.tsx
│   └── not-found.tsx
├── components/
│   ├── ui/                    # shadcn/ui
│   ├── layout/                # Sidebar, Topbar, PortSwitcher, Breadcrumbs
│   ├── shared/                # Reusable composed components
│   └── [domain]/              # Per-domain (clients/, interests/, berths/, etc.)
├── lib/
│   ├── db/
│   │   ├── index.ts           # Drizzle client
│   │   ├── schema/            # 1 file per domain + relations
│   │   │   ├── index.ts
│   │   │   ├── ports.ts
│   │   │   ├── users.ts
│   │   │   ├── clients.ts
│   │   │   ├── interests.ts
│   │   │   ├── berths.ts
│   │   │   ├── documents.ts
│   │   │   ├── financial.ts
│   │   │   ├── email.ts
│   │   │   ├── operations.ts
│   │   │   ├── system.ts
│   │   │   └── relations.ts   # ← NEW: all relations() in one file
│   │   ├── migrations/
│   │   └── seed.ts
│   ├── auth/
│   │   ├── index.ts           # Better Auth server
│   │   ├── client.ts          # Better Auth React hooks
│   │   └── permissions.ts     # Permission check helpers
│   ├── api/
│   │   └── helpers.ts         # ← NEW: withAuth/withPort/withPermission composable chain
│   ├── services/              # Business logic (1 file per domain, added in L1+)
│   ├── validators/            # Zod schemas (1 file per domain, added in L1+)
│   ├── queue/
│   │   ├── index.ts           # Queue definitions (10 queues matching spec)
│   │   ├── scheduler.ts       # Recurring job registration
│   │   └── workers/           # Worker processors (stubs)
│   ├── socket/
│   │   ├── server.ts          # Socket.io setup
│   │   ├── events.ts          # Event type definitions
│   │   └── rooms.ts           # Room join/leave logic
│   ├── minio/
│   │   └── index.ts
│   ├── email/
│   │   └── index.ts           # Nodemailer transporter factory
│   ├── redis.ts
│   ├── rate-limit.ts          # ← NEW: Redis sliding window rate limiter
│   ├── logger.ts
│   ├── errors.ts
│   ├── constants.ts
│   ├── audit.ts               # ← NEW: dedicated audit log module
│   ├── env.ts
│   └── utils.ts
├── hooks/
│   ├── use-auth.ts
│   ├── use-port.ts
│   ├── use-socket.ts
│   └── use-permissions.ts
├── providers/
│   ├── query-provider.tsx
│   ├── socket-provider.tsx
│   └── port-provider.tsx
├── stores/
│   └── ui-store.ts
├── types/
│   ├── api.ts
│   ├── auth.ts
│   └── domain.ts
├── jobs/                      # ← matches locked structure
│   └── (empty — populated in L2+)
├── emails/                    # MJML templates
│   └── (empty — populated in L2+)
└── server.ts                  # ← NEW: custom server entry point
```

#### Morning: Docker Compose (1 hour)

**Step 7: Docker Compose**

**File:** `docker-compose.yml`

```yaml
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: port_nimara_crm
      POSTGRES_USER: ${DB_USER:-crm}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./docker/postgres/init.sql:/docker-entrypoint-initdb.d/01-init.sql
    healthcheck:
      # $$ escapes compose interpolation so the container's own env is used;
      # a hardcoded -U crm would break whenever DB_USER overrides the default
      test: ['CMD-SHELL', 'pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}']
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD} --maxmemory 256mb --maxmemory-policy allkeys-lru
    volumes:
      - redisdata:/data
    healthcheck:
      test: ['CMD', 'redis-cli', '-a', '${REDIS_PASSWORD}', 'ping']
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal

  crm-app:
    build:
      context: .
      dockerfile: Dockerfile
    env_file: .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ['CMD', 'wget', '--no-verbose', '--tries=1', '--spider', 'http://localhost:3000/api/health']
      interval: 15s
      timeout: 5s
      retries: 3
    networks:
      - internal

  crm-worker:
    build:
      context: .
      dockerfile: Dockerfile.worker
    env_file: .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - internal

  nginx:
    image: nginx:alpine
    ports:
      - '443:443'
      - '80:80'
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/certs:/etc/nginx/certs:ro
    depends_on:
      crm-app:
        condition: service_healthy
    networks:
      - internal

volumes:
  pgdata:
  redisdata:

networks:
  internal:
    driver: bridge
```

**File:** `docker-compose.dev.yml` (override)

```yaml
services:
  postgres:
    ports:
      - '5432:5432'
  redis:
    ports:
      - '6379:6379'
  crm-app:
    build:
      dockerfile: Dockerfile.dev
    ports:
      - '3000:3000'
    volumes:
      - .:/app
      - /app/node_modules
    command: pnpm dev
  crm-worker:
    profiles: ['worker'] # Optional in dev — server.ts runs workers inline
  nginx:
    profiles: ['nginx'] # Skip nginx in dev
```

**File:** `docker/postgres/init.sql`

```sql
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
```

**File:** `Dockerfile`

```dockerfile
FROM node:20-alpine AS deps
RUN corepack enable && corepack prepare pnpm@latest --activate
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile --prod=false

FROM node:20-alpine AS builder
RUN corepack enable && corepack prepare pnpm@latest --activate
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ENV NEXT_TELEMETRY_DISABLED=1
RUN pnpm build

FROM node:20-alpine AS runner
RUN addgroup --system --gid 1001 nodejs && adduser --system --uid 1001 nextjs
WORKDIR /app
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --from=builder --chown=nextjs:nodejs /app/public ./public
USER nextjs
EXPOSE 3000
CMD ["node", "server.js"]
```

**File:** `Dockerfile.worker`

```dockerfile
FROM node:20-alpine AS deps
RUN corepack enable && corepack prepare pnpm@latest --activate
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile --prod

FROM node:20-alpine AS runner
RUN addgroup --system --gid 1001 nodejs && adduser --system --uid 1001 worker
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
# dist/worker.js must be bundled (e.g. via esbuild/tsup) before `docker build`
COPY dist/worker.js ./worker.js
USER worker
CMD ["node", "worker.js"]
```

**File:** `next.config.ts`

```typescript
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  output: 'standalone',
  serverExternalPackages: [
    'pino',
    'pino-pretty',
    'bullmq',
    'ioredis',
    'minio',
    'postgres',
    'better-auth',
    'argon2',
    'nodemailer',
  ],
  images: {
    remotePatterns: [{ protocol: 'https', hostname: '*.portnimara.com' }],
  },
  experimental: {
    typedRoutes: true,
  },
};

export default nextConfig;
```

#### Afternoon: Database Layer (4 hours)

**Step 8: Install Drizzle + driver**

```bash
pnpm add drizzle-orm postgres
pnpm add -D drizzle-kit
```

**Step 9: Drizzle config**

**File:** `drizzle.config.ts`

```typescript
import { defineConfig } from 'drizzle-kit';

export default defineConfig({
  schema: './src/lib/db/schema',
  out: './src/lib/db/migrations',
  dialect: 'postgresql',
  dbCredentials: {
    url: process.env.DATABASE_URL!,
  },
  verbose: true,
  strict: true,
});
```

**Step 10: Drizzle client**

**File:** `src/lib/db/index.ts`

```typescript
import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';

// Import the validated env rather than raw process.env — per the rule that
// every module consumes config through src/lib/env.ts.
import { env } from '../env';
import * as schema from './schema';

// Connection pool for queries
const queryClient = postgres(env.DATABASE_URL, {
  max: 20,
  idle_timeout: 20,
  connect_timeout: 10,
});

export const db = drizzle(queryClient, { schema, logger: env.NODE_ENV === 'development' });

export type Database = typeof db;
```

**Step 11: Define ALL 49 table schemas**

Each schema file follows these conventions from `07-DATABASE-SCHEMA.md`:

- UUID primary keys: `uuid('id').primaryKey().defaultRandom()`
- Timestamps: `timestamp('created_at', { withTimezone: true }).notNull().defaultNow()`
- Port scoping: `uuid('port_id').notNull().references(() => ports.id, { onDelete: 'cascade' })`
- Soft deletes: `timestamp('archived_at', { withTimezone: true })`

Schema files (matching `07-DATABASE-SCHEMA.md` exactly):

**File:** `src/lib/db/schema/ports.ts`

```typescript
import { pgTable, uuid, varchar, boolean, timestamp, jsonb } from 'drizzle-orm/pg-core';

export const ports = pgTable('ports', {
  id: uuid('id').primaryKey().defaultRandom(),
  name: varchar('name', { length: 100 }).notNull(),
  slug: varchar('slug', { length: 50 }).notNull().unique(),
  defaultCurrency: varchar('default_currency', { length: 3 }).notNull().default('USD'),
  timezone: varchar('timezone', { length: 50 }).notNull().default('America/Anguilla'),
  settings: jsonb('settings').$type<PortSettings>().default({}),
  branding: jsonb('branding').$type<PortBranding>().default({}),
  isActive: boolean('is_active').notNull().default(true),
  createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
  updatedAt: timestamp('updated_at', { withTimezone: true }).notNull().defaultNow(),
});

export type PortSettings = {
  followUpReminderDays?: number[];
  tenureExpiryWarningMonths?: number;
  eoiReminderIntervalDays?: number;
  berthStatusRules?: Array<{
    trigger: string;
    mode: 'auto' | 'suggest' | 'off';
    targetStatus: string;
  }>;
};

export type PortBranding = {
  logoUrl?: string;
  primaryColor?: string;
  secondaryColor?: string;
};
```
|
|||
|
|
|
|||
|
|
**File:** `src/lib/db/schema/users.ts`
|
|||
|
|
|
|||
|
|
```typescript
|
|||
|
|
import { pgTable, uuid, varchar, boolean, timestamp, jsonb, index } from 'drizzle-orm/pg-core';
|
|||
|
|
|
|||
|
|
import { ports } from './ports';
|
|||
|
|
|
|||
|
|
export const users = pgTable('users', {
|
|||
|
|
id: uuid('id').primaryKey().defaultRandom(),
|
|||
|
|
email: varchar('email', { length: 255 }).notNull().unique(),
|
|||
|
|
name: varchar('name', { length: 255 }).notNull(),
|
|||
|
|
emailVerified: boolean('email_verified').notNull().default(false),
|
|||
|
|
image: varchar('image', { length: 500 }),
|
|||
|
|
isSuperAdmin: boolean('is_super_admin').notNull().default(false),
|
|||
|
|
isActive: boolean('is_active').notNull().default(true),
|
|||
|
|
timezone: varchar('timezone', { length: 50 }).default('America/Anguilla'),
|
|||
|
|
preferences: jsonb('preferences').$type<UserPreferences>().default({}),
|
|||
|
|
lastLoginAt: timestamp('last_login_at', { withTimezone: true }),
|
|||
|
|
createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
|
|||
|
|
updatedAt: timestamp('updated_at', { withTimezone: true }).notNull().defaultNow(),
|
|||
|
|
});
|
|||
|
|
|
|||
|
|
export type UserPreferences = {
|
|||
|
|
sidebarCollapsed?: boolean;
|
|||
|
|
darkMode?: boolean;
|
|||
|
|
defaultPortId?: string;
|
|||
|
|
notificationChannels?: Record<string, { inApp: boolean; email: boolean }>;
|
|||
|
|
};
|
|||
|
|
|
|||
|
|
export const roles = pgTable('roles', {
|
|||
|
|
id: uuid('id').primaryKey().defaultRandom(),
|
|||
|
|
name: varchar('name', { length: 100 }).notNull().unique(),
|
|||
|
|
description: varchar('description', { length: 500 }),
|
|||
|
|
permissions: jsonb('permissions').$type<RolePermissions>().notNull(),
|
|||
|
|
isSystem: boolean('is_system').notNull().default(false),
|
|||
|
|
createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
|
|||
|
|
updatedAt: timestamp('updated_at', { withTimezone: true }).notNull().defaultNow(),
|
|||
|
|
});
|
|||
|
|
|
|||
|
|
export const userPortRoles = pgTable(
|
|||
|
|
'user_port_roles',
|
|||
|
|
{
|
|||
|
|
id: uuid('id').primaryKey().defaultRandom(),
|
|||
|
|
userId: uuid('user_id')
|
|||
|
|
.notNull()
|
|||
|
|
.references(() => users.id, { onDelete: 'cascade' }),
|
|||
|
|
portId: uuid('port_id')
|
|||
|
|
.notNull()
|
|||
|
|
.references(() => ports.id, { onDelete: 'cascade' }),
|
|||
|
|
roleId: uuid('role_id')
|
|||
|
|
.notNull()
|
|||
|
|
.references(() => roles.id, { onDelete: 'restrict' }),
|
|||
|
|
createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
|
|||
|
|
},
|
|||
|
|
(table) => [index('idx_upr_user_port').on(table.userId, table.portId)],
|
|||
|
|
);
|
|||
|
|
|
|||
|
|
export const portRoleOverrides = pgTable('port_role_overrides', {
|
|||
|
|
id: uuid('id').primaryKey().defaultRandom(),
|
|||
|
|
portId: uuid('port_id')
|
|||
|
|
.notNull()
|
|||
|
|
.references(() => ports.id, { onDelete: 'cascade' }),
|
|||
|
|
roleId: uuid('role_id')
|
|||
|
|
.notNull()
|
|||
|
|
.references(() => roles.id, { onDelete: 'cascade' }),
|
|||
|
|
permissionOverrides: jsonb('permission_overrides').$type<Partial<RolePermissions>>().notNull(),
|
|||
|
|
createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
|
|||
|
|
updatedAt: timestamp('updated_at', { withTimezone: true }).notNull().defaultNow(),
|
|||
|
|
});
|
|||
|
|
|
|||
|
|
export const sessions = pgTable(
|
|||
|
|
'sessions',
|
|||
|
|
{
|
|||
|
|
id: uuid('id').primaryKey().defaultRandom(),
|
|||
|
|
userId: uuid('user_id')
|
|||
|
|
.notNull()
|
|||
|
|
.references(() => users.id, { onDelete: 'cascade' }),
|
|||
|
|
    token: varchar('token', { length: 500 }).notNull().unique(),
    expiresAt: timestamp('expires_at', { withTimezone: true }).notNull(),
    ipAddress: varchar('ip_address', { length: 45 }),
    userAgent: varchar('user_agent', { length: 500 }),
    createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
  },
  (table) => [
    index('idx_sessions_token').on(table.token),
    index('idx_sessions_user').on(table.userId),
  ],
);

export type RolePermissions = {
  clients: {
    view: boolean;
    create: boolean;
    edit: boolean;
    delete: boolean;
    merge: boolean;
    export: boolean;
  };
  interests: {
    view: boolean;
    create: boolean;
    edit: boolean;
    delete: boolean;
    change_stage: boolean;
    generate_eoi: boolean;
    export: boolean;
  };
  berths: { view: boolean; edit: boolean; import: boolean; manage_waiting_list: boolean };
  documents: {
    view: boolean;
    create: boolean;
    send_for_signing: boolean;
    upload_signed: boolean;
    delete: boolean;
  };
  expenses: {
    view: boolean;
    create: boolean;
    edit: boolean;
    delete: boolean;
    export: boolean;
    scan_receipt: boolean;
  };
  invoices: {
    view: boolean;
    create: boolean;
    edit: boolean;
    delete: boolean;
    send: boolean;
    record_payment: boolean;
    export: boolean;
  };
  files: { view: boolean; upload: boolean; delete: boolean; manage_folders: boolean };
  email: { view: boolean; send: boolean; configure_account: boolean };
  reminders: {
    view_own: boolean;
    view_all: boolean;
    create: boolean;
    edit_own: boolean;
    edit_all: boolean;
    assign_others: boolean;
  };
  calendar: { connect: boolean; view_events: boolean };
  reports: { view_dashboard: boolean; view_analytics: boolean; export: boolean };
  document_templates: { view: boolean; generate: boolean; manage: boolean };
  admin: {
    manage_users: boolean;
    view_audit_log: boolean;
    manage_settings: boolean;
    manage_webhooks: boolean;
    manage_reports: boolean;
    manage_custom_fields: boolean;
    manage_forms: boolean;
    manage_tags: boolean;
    system_backup: boolean;
  };
};
```
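
Checks against the nested permission map above can be kept type-safe with a small generic lookup helper. This is a sketch, not part of the locked spec; `hasPermission` and the trimmed-down `Permissions` type are illustrative names, and the `null` convention mirrors the `AuthContext.permissions` contract (null = super admin, bypasses all checks):

```typescript
// Trimmed-down RolePermissions shape, for illustration only.
type Permissions = {
  clients: { view: boolean; create: boolean };
  email: { view: boolean; send: boolean };
};

// Illustrative helper: type-safe two-level lookup. Passing an invalid
// resource/action pair is a compile error rather than a silent `undefined`.
function hasPermission<R extends keyof Permissions>(
  perms: Permissions | null, // null = super admin, bypasses all checks
  resource: R,
  action: keyof Permissions[R],
): boolean {
  if (perms === null) return true; // super admin
  return Boolean(perms[resource][action]);
}

const viewer: Permissions = {
  clients: { view: true, create: false },
  email: { view: true, send: false },
};
```

The same pattern scales to the full `RolePermissions` type without changes to the helper's signature.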

I won't reproduce the full schema for every table here (the remaining 40+ tables follow the same conventions), but here are the key schema files and their table lists:

**File:** `src/lib/db/schema/clients.ts` — `clients`, `client_contacts`, `client_relationships`, `client_notes`, `client_tags`, `client_merge_log`

**File:** `src/lib/db/schema/interests.ts` — `interests`, `interest_notes`, `interest_tags`

**File:** `src/lib/db/schema/berths.ts` — `berths`, `berth_map_data`, `berth_recommendations`, `berth_waiting_list`, `berth_maintenance_log`, `berth_tags`

**File:** `src/lib/db/schema/documents.ts` — `documents`, `document_signers`, `document_events`, `document_templates`, `form_templates`, `form_submissions`

**File:** `src/lib/db/schema/financial.ts` — `expenses`, `invoices`, `invoice_line_items`, `invoice_expenses`

**File:** `src/lib/db/schema/email.ts` — `email_accounts`, `email_threads`, `email_messages`

**File:** `src/lib/db/schema/operations.ts` — `reminders`, `google_calendar_tokens`, `google_calendar_cache`, `notifications`, `scheduled_reports`, `report_recipients`

**File:** `src/lib/db/schema/system.ts` — `audit_logs`, `tags`, `files`, `webhooks`, `webhook_deliveries`, `system_settings`, `saved_views`, `scratchpad_notes`, `user_notification_preferences`, `currency_rates`, `custom_field_definitions`, `custom_field_values`

**Critical:** Every port-scoped table includes `portId: uuid('port_id').notNull().references(() => ports.id, { onDelete: 'cascade' })` and an index on `port_id`. Tables with soft deletes include an `archivedAt` column. All foreign keys specify `onDelete` behavior per `07-DATABASE-SCHEMA.md`.

**File:** `src/lib/db/schema/relations.ts`

```typescript
import { relations } from 'drizzle-orm';
import { ports } from './ports';
import { users, roles, userPortRoles, portRoleOverrides, sessions } from './users';
import { clients, clientContacts, clientRelationships, clientNotes, clientTags } from './clients';
// ... all other imports

export const portsRelations = relations(ports, ({ many }) => ({
  userPortRoles: many(userPortRoles),
  portRoleOverrides: many(portRoleOverrides),
  clients: many(clients),
  interests: many(interests),
  berths: many(berths),
}));

export const usersRelations = relations(users, ({ many }) => ({
  sessions: many(sessions),
  portRoles: many(userPortRoles),
}));

export const userPortRolesRelations = relations(userPortRoles, ({ one }) => ({
  user: one(users, { fields: [userPortRoles.userId], references: [users.id] }),
  port: one(ports, { fields: [userPortRoles.portId], references: [ports.id] }),
  role: one(roles, { fields: [userPortRoles.roleId], references: [roles.id] }),
}));

// ... relations for all other tables following the same pattern
// Every FK gets a corresponding relation() declaration
```

**File:** `src/lib/db/schema/index.ts`

```typescript
export * from './ports';
export * from './users';
export * from './clients';
export * from './interests';
export * from './berths';
export * from './documents';
export * from './financial';
export * from './email';
export * from './operations';
export * from './system';
export * from './relations';
```

**Step 12: Run initial migration**

```bash
pnpm drizzle-kit generate
pnpm drizzle-kit push
```
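
These commands read their paths from a `drizzle.config.ts` at the repo root. A minimal sketch is below; the `schema` and `out` paths are assumptions based on the file layout above, not locked decisions:

```typescript
// drizzle.config.ts — sketch; paths are assumptions from this plan's layout.
import { defineConfig } from 'drizzle-kit';

export default defineConfig({
  dialect: 'postgresql',
  schema: './src/lib/db/schema/index.ts', // barrel file exporting all tables
  out: './drizzle',                        // generated SQL migrations
  dbCredentials: { url: process.env.DATABASE_URL! },
});
```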

Verify: all 49 tables created with correct columns, constraints, indexes, and the `pgcrypto` + `uuid-ossp` extensions enabled.

**Step 13: Seed data**

**File:** `src/lib/db/seed.ts`

Seeds:

1. Default port: `{ name: 'Port Nimara', slug: 'port-nimara', defaultCurrency: 'USD', timezone: 'America/Anguilla' }`
2. Five system roles matching `10-AUTH-AND-PERMISSIONS.md` Section 2.4:
   - `super_admin` — all permissions `true`
   - `director` — all operational permissions `true`, `admin.manage_users: true`, `admin.view_audit_log: true`, system admin `false`
   - `sales_manager` — full sales access, `reminders.view_all: true`, `reminders.assign_others: true`
   - `sales_agent` — standard sales (view/create/edit clients/interests, own reminders only)
   - `viewer` — all `view` permissions `true`, everything else `false`
3. Super admin user: `{ email: 'matt@portnimara.com', name: 'Matt', isSuperAdmin: true }` — password set via email flow
4. Assign Matt to Port Nimara with `super_admin` role

Run via: `pnpm tsx src/lib/db/seed.ts`
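
Rather than hand-writing five full permission maps, the seed script can derive each role from one complete template. A sketch of that approach, with illustrative names (`deriveRole`, `PermissionMap`) and a trimmed template; the real seed would use the full `RolePermissions` shape:

```typescript
// Illustrative helper: derive a role's permission map from a base template by
// enabling only actions whose name passes a predicate.
type PermissionMap = Record<string, Record<string, boolean>>;

function deriveRole(
  template: PermissionMap,
  allow: (action: string) => boolean,
): PermissionMap {
  const out: PermissionMap = {};
  for (const [resource, actions] of Object.entries(template)) {
    out[resource] = {};
    for (const action of Object.keys(actions)) {
      out[resource][action] = allow(action);
    }
  }
  return out;
}

// Trimmed template for illustration.
const template: PermissionMap = {
  clients: { view: true, create: true, edit: true },
  reminders: { view_own: true, view_all: true, create: true },
};

// viewer: all `view` permissions true, everything else false
const viewer = deriveRole(template, (a) => a.startsWith('view'));
```

This keeps the template as the single source of truth, so adding a new action only requires touching one object.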

---
### Day 2 — Auth + Infrastructure

#### Morning: Authentication (3 hours)

**Step 14: Better Auth setup**

```bash
pnpm add better-auth @better-auth/redis
```

**File:** `src/lib/auth/index.ts`

```typescript
import { betterAuth } from 'better-auth';
import { drizzleAdapter } from 'better-auth/adapters/drizzle';
import { redis as redisPlugin } from '@better-auth/redis';

import { db } from '@/lib/db';
import { redis } from '@/lib/redis';
import { logger } from '@/lib/logger';

export const auth = betterAuth({
  database: drizzleAdapter(db),
  emailAndPassword: {
    enabled: true,
    minPasswordLength: 12,
    requireEmailVerification: false, // Admin-created accounts, no self-signup
  },
  session: {
    cookieCache: { enabled: true, maxAge: 5 * 60 },
    expiresIn: 60 * 60 * 24, // 24 hours absolute
    updateAge: 60 * 60 * 6, // Refresh in last 25%
  },
  plugins: [
    redisPlugin({ redis }), // Session store in Redis per 14-TECHNICAL-DECISIONS.md
  ],
  advanced: {
    cookiePrefix: 'pn-crm',
    defaultCookieAttributes: {
      httpOnly: true,
      secure: process.env.NODE_ENV === 'production',
      sameSite: 'strict' as const,
    },
  },
  logger: {
    // pino takes the merge object first, then the message
    error: (msg) => logger.error({ scope: 'auth' }, msg),
    warn: (msg) => logger.warn({ scope: 'auth' }, msg),
  },
});

export type Session = typeof auth.$Infer.Session;
```

**File:** `src/lib/auth/client.ts`

```typescript
import { createAuthClient } from 'better-auth/react';

export const authClient = createAuthClient({
  baseURL: process.env.NEXT_PUBLIC_APP_URL,
});

export const { useSession, signIn, signOut } = authClient;
```

**File:** `src/app/api/auth/[...all]/route.ts`

```typescript
import { auth } from '@/lib/auth';
import { toNextJsHandler } from 'better-auth/next-js';

export const { GET, POST } = toNextJsHandler(auth);
```

**Step 15: Auth + port context middleware for API routes**

**File:** `src/lib/api/helpers.ts`

```typescript
import { NextRequest, NextResponse } from 'next/server';
import { and, eq } from 'drizzle-orm';

import { auth } from '@/lib/auth';
import { db } from '@/lib/db';
import { users, userPortRoles, roles, portRoleOverrides } from '@/lib/db/schema';
import { type RolePermissions } from '@/lib/db/schema/users';
import { AppError, errorResponse } from '@/lib/errors';
import { createAuditLog } from '@/lib/audit';
import { logger } from '@/lib/logger';

export interface AuthContext {
  userId: string;
  portId: string;
  portSlug: string;
  isSuperAdmin: boolean;
  permissions: RolePermissions | null; // null = super admin (bypasses all)
  user: {
    email: string;
    name: string;
  };
  ipAddress: string;
  userAgent: string;
}

type RouteHandler<T = unknown> = (
  req: NextRequest,
  ctx: AuthContext,
  params: Record<string, string>,
) => Promise<NextResponse<T>>;

/**
 * Composable API route wrapper.
 * Usage: export const GET = withAuth(withPermission('clients', 'view', handler))
 */
export function withAuth(
  handler: RouteHandler,
): (
  req: NextRequest,
  context: { params: Promise<Record<string, string>> },
) => Promise<NextResponse> {
  return async (req, routeContext) => {
    try {
      // 1. Validate session
      const session = await auth.api.getSession({ headers: req.headers });
      if (!session?.user) {
        return NextResponse.json({ error: 'Authentication required' }, { status: 401 });
      }

      // 2. Load user record
      const user = await db.query.users.findFirst({
        where: eq(users.id, session.user.id),
      });
      if (!user || !user.isActive) {
        return NextResponse.json({ error: 'Account disabled' }, { status: 403 });
      }

      // 3. Determine port context from header or default
      const portId = req.headers.get('X-Port-Id') || user.preferences?.defaultPortId;
      if (!portId && !user.isSuperAdmin) {
        return NextResponse.json({ error: 'Port context required' }, { status: 400 });
      }

      // 4. Load permissions (skip for super admin)
      let permissions: RolePermissions | null = null;
      let portSlug = '';
      if (!user.isSuperAdmin && portId) {
        const portRole = await db.query.userPortRoles.findFirst({
          where: and(eq(userPortRoles.userId, user.id), eq(userPortRoles.portId, portId)),
          with: { role: true, port: true },
        });
        if (!portRole) {
          return NextResponse.json({ error: 'No access to this port' }, { status: 403 });
        }
        permissions = { ...portRole.role.permissions } as RolePermissions;
        portSlug = portRole.port?.slug ?? '';

        // Apply port overrides
        const override = await db.query.portRoleOverrides.findFirst({
          where: and(
            eq(portRoleOverrides.portId, portId),
            eq(portRoleOverrides.roleId, portRole.roleId),
          ),
        });
        if (override) {
          permissions = deepMerge(permissions, override.permissionOverrides) as RolePermissions;
        }
      }

      const ctx: AuthContext = {
        userId: user.id,
        portId: portId!,
        portSlug,
        isSuperAdmin: user.isSuperAdmin,
        permissions,
        user: { email: user.email, name: user.name },
        ipAddress: req.headers.get('x-forwarded-for')?.split(',')[0]?.trim() || 'unknown',
        userAgent: req.headers.get('user-agent') || 'unknown',
      };

      const params = await routeContext.params;
      return await handler(req, ctx, params);
    } catch (error) {
      return errorResponse(error);
    }
  };
}

export function withPermission(
  resource: keyof RolePermissions,
  action: string,
  handler: RouteHandler,
): RouteHandler {
  return async (req, ctx, params) => {
    if (!ctx.isSuperAdmin) {
      const resourcePerms = ctx.permissions?.[resource] as Record<string, boolean> | undefined;
      if (!resourcePerms || !resourcePerms[action]) {
        logger.warn({ userId: ctx.userId, resource, action }, 'Permission denied');
        await createAuditLog({
          userId: ctx.userId,
          portId: ctx.portId,
          action: 'permission_denied',
          entityType: resource,
          entityId: '',
          metadata: { attemptedAction: action },
          ipAddress: ctx.ipAddress,
          userAgent: ctx.userAgent,
        });
        return NextResponse.json({ error: 'Insufficient permissions' }, { status: 403 });
      }
    }
    return handler(req, ctx, params);
  };
}

function deepMerge(
  target: Record<string, unknown>,
  source: Record<string, unknown>,
): Record<string, unknown> {
  const result = { ...target };
  for (const key of Object.keys(source)) {
    if (
      typeof source[key] === 'object' &&
      source[key] !== null &&
      typeof result[key] === 'object' &&
      result[key] !== null
    ) {
      result[key] = deepMerge(
        result[key] as Record<string, unknown>,
        source[key] as Record<string, unknown>,
      );
    } else {
      result[key] = source[key];
    }
  }
  return result;
}
```
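
The port-override merge at the heart of `withAuth` is pure and worth exercising in isolation. This standalone sketch repeats the `deepMerge` helper from `helpers.ts` with a sample base-role map and a port-level override:

```typescript
// Standalone copy of deepMerge from helpers.ts, exercised with sample data.
function deepMerge(
  target: Record<string, unknown>,
  source: Record<string, unknown>,
): Record<string, unknown> {
  const result = { ...target };
  for (const key of Object.keys(source)) {
    if (
      typeof source[key] === 'object' &&
      source[key] !== null &&
      typeof result[key] === 'object' &&
      result[key] !== null
    ) {
      result[key] = deepMerge(
        result[key] as Record<string, unknown>,
        source[key] as Record<string, unknown>,
      );
    } else {
      result[key] = source[key];
    }
  }
  return result;
}

const basePerms = { clients: { view: true, export: true }, email: { send: true } };
const portOverride = { clients: { export: false } }; // this port disables client export

const effective = deepMerge(basePerms, portOverride) as typeof basePerms;
```

Note the merge is recursive per resource: only the overridden action changes, sibling actions and untouched resources survive.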

**Step 16: Next.js page-level middleware**

**File:** `src/middleware.ts`

```typescript
import { NextRequest, NextResponse } from 'next/server';

const PUBLIC_PATHS = ['/login', '/auth/', '/api/auth/', '/api/public/', '/api/health', '/scan'];

export function middleware(req: NextRequest) {
  const { pathname } = req.nextUrl;

  // Public paths — no auth check
  if (PUBLIC_PATHS.some((p) => pathname.startsWith(p))) {
    return NextResponse.next();
  }

  // Check for session cookie
  const sessionCookie = req.cookies.get('pn-crm.session_token');
  if (!sessionCookie?.value) {
    // API routes → 401 JSON
    if (pathname.startsWith('/api/')) {
      return NextResponse.json({ error: 'Authentication required' }, { status: 401 });
    }
    // Pages → redirect to login
    const loginUrl = new URL('/login', req.url);
    loginUrl.searchParams.set('redirect', pathname);
    return NextResponse.redirect(loginUrl);
  }

  return NextResponse.next();
}

export const config = {
  matcher: ['/((?!_next/static|_next/image|favicon.ico|.*\\.(?:svg|png|jpg|jpeg|gif|webp)$).*)'],
};
```
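
The allowlist is plain prefix matching, which can be sanity-checked on its own. A standalone sketch of the same test the middleware runs:

```typescript
// Same prefix test the middleware uses for its public-path allowlist.
const PUBLIC_PATHS = ['/login', '/auth/', '/api/auth/', '/api/public/', '/api/health', '/scan'];

function isPublicPath(pathname: string): boolean {
  return PUBLIC_PATHS.some((p) => pathname.startsWith(p));
}
```

One caveat of `startsWith`: an entry like `/scan` also matches `/scanner` or `/scan-results`, so prefixes without a trailing slash should be added with care.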

**Step 17: Login page**

**File:** `src/app/(auth)/login/page.tsx`

- Full-width centered card on dark navy (`#1e2844`) background
- PN logo + "Port Nimara" text + "Marina CRM" subtitle
- Email + password fields using shadcn `Input` + `Label` components
- React Hook Form + Zod validation:
  ```typescript
  const loginSchema = z.object({
    email: z.string().email('Invalid email'),
    password: z.string().min(1, 'Password is required'),
  });
  ```
- "Forgot password?" link → `/auth/reset-password`
- Rate limiting feedback: show lockout message after 5 failures
- On success: redirect to `/(dashboard)/[portSlug]/` (derive slug from user's port assignment)
- No "sign up" link — accounts are admin-created only
- shadcn components used: `Card`, `CardHeader`, `CardContent`, `Input`, `Label`, `Button`, `Form`

**File:** `src/app/(auth)/set-password/page.tsx`

- Token from URL search params
- New password + confirm password
- Zod schema: min 12 chars, 1 uppercase, 1 lowercase, 1 digit, 1 special char
  ```typescript
  const passwordSchema = z
    .object({
      password: z
        .string()
        .min(12, 'Minimum 12 characters')
        .regex(/[A-Z]/, 'Must contain uppercase letter')
        .regex(/[a-z]/, 'Must contain lowercase letter')
        .regex(/[0-9]/, 'Must contain digit')
        .regex(/[^A-Za-z0-9]/, 'Must contain special character'),
      confirmPassword: z.string(),
    })
    .refine((data) => data.password === data.confirmPassword, {
      message: 'Passwords must match',
      path: ['confirmPassword'],
    });
  ```

**File:** `src/app/(auth)/reset-password/page.tsx`

- Email input form → submit → same success message regardless of whether email exists

**File:** `src/app/(auth)/layout.tsx`

- Dark navy background, centered content, no sidebar/topbar
- Port Nimara branding

#### Afternoon: Infrastructure (3 hours)

**Step 18: Redis connection**

```bash
pnpm add ioredis
```

**File:** `src/lib/redis.ts`

```typescript
import Redis from 'ioredis';

import { logger } from '@/lib/logger';

const redisUrl = process.env.REDIS_URL!;

export const redis = new Redis(redisUrl, {
  maxRetriesPerRequest: 3,
  retryStrategy(times) {
    const delay = Math.min(times * 200, 2000);
    return delay;
  },
  lazyConnect: true,
});

redis.on('error', (err) => logger.error({ err }, 'Redis connection error'));
redis.on('connect', () => logger.info('Redis connected'));
```
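
The retry curve configured above grows linearly at 200 ms per attempt and caps at 2 s, so reconnection never backs off forever. The same function, extracted for inspection:

```typescript
// Same retryStrategy as the Redis client above: linear backoff, 2 s ceiling.
function retryDelay(times: number): number {
  return Math.min(times * 200, 2000);
}
```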

**Step 19: Application-level rate limiter**

**File:** `src/lib/rate-limit.ts`

```typescript
import { NextResponse } from 'next/server';

import { redis } from '@/lib/redis';

interface RateLimitConfig {
  windowMs: number;
  max: number;
  keyPrefix: string;
}

interface RateLimitResult {
  allowed: boolean;
  limit: number;
  remaining: number;
  resetAt: number;
}

export async function checkRateLimit(
  identifier: string,
  config: RateLimitConfig,
): Promise<RateLimitResult> {
  const key = `rl:${config.keyPrefix}:${identifier}`;
  const now = Date.now();
  const windowStart = now - config.windowMs;

  const pipeline = redis.pipeline();
  pipeline.zremrangebyscore(key, '-inf', windowStart);
  pipeline.zadd(key, now.toString(), `${now}:${Math.random()}`);
  pipeline.zcard(key);
  pipeline.pexpire(key, config.windowMs);
  const results = await pipeline.exec();

  const count = (results?.[2]?.[1] as number) ?? 0;
  const remaining = Math.max(0, config.max - count);

  return {
    allowed: count <= config.max,
    limit: config.max,
    remaining,
    resetAt: now + config.windowMs,
  };
}

export function rateLimitHeaders(result: RateLimitResult): Record<string, string> {
  return {
    'X-RateLimit-Limit': result.limit.toString(),
    'X-RateLimit-Remaining': result.remaining.toString(),
    'X-RateLimit-Reset': Math.ceil(result.resetAt / 1000).toString(),
  };
}

// Pre-configured limiters
export const rateLimiters = {
  auth: { windowMs: 15 * 60 * 1000, max: 5, keyPrefix: 'auth' },
  api: { windowMs: 60 * 1000, max: 120, keyPrefix: 'api' },
  upload: { windowMs: 60 * 1000, max: 10, keyPrefix: 'upload' },
  bulk: { windowMs: 60 * 1000, max: 5, keyPrefix: 'bulk' },
} as const;
```
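
The Redis pipeline above implements a sliding-window counter: expire entries older than the window, record the current request, then count what remains. The same logic in-memory, useful for unit tests without a Redis instance (a sketch, not the production path):

```typescript
// In-memory equivalent of the Redis sorted-set sliding window:
// drop timestamps outside the window, add the new request, then count.
function slidingWindowCheck(
  timestamps: number[], // prior request times for this key, oldest first (mutated in place)
  now: number,
  windowMs: number,
  max: number,
): { allowed: boolean; remaining: number } {
  while (timestamps.length > 0 && timestamps[0] <= now - windowMs) {
    timestamps.shift(); // expire entries outside the window
  }
  timestamps.push(now);
  const count = timestamps.length;
  return { allowed: count <= max, remaining: Math.max(0, max - count) };
}
```

As in `checkRateLimit`, the current request is counted before the comparison, so request `max + 1` within one window is the first to be refused.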

**Step 20: Structured logger**

```bash
pnpm add pino pino-pretty
```

**File:** `src/lib/logger.ts`

```typescript
import pino from 'pino';

export const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  redact: {
    paths: [
      'password',
      'token',
      'secret',
      'accessKey',
      'secretKey',
      'creditCard',
      '*.password',
      '*.token',
    ],
    censor: '[REDACTED]',
  },
  transport:
    process.env.NODE_ENV !== 'production'
      ? { target: 'pino-pretty', options: { colorize: true } }
      : undefined,
  serializers: {
    err: pino.stdSerializers.err,
    req: pino.stdSerializers.req,
  },
});
```

**Step 21: Error handling**

**File:** `src/lib/errors.ts`

```typescript
import { NextResponse } from 'next/server';
import { ZodError } from 'zod';

import { logger } from '@/lib/logger';

export class AppError extends Error {
  constructor(
    public statusCode: number,
    message: string,
    public code?: string,
  ) {
    super(message);
    this.name = 'AppError';
  }
}

export class NotFoundError extends AppError {
  constructor(entity: string) {
    super(404, `${entity} not found`, 'NOT_FOUND');
  }
}

export class ForbiddenError extends AppError {
  constructor(message = 'Access denied') {
    super(403, message, 'FORBIDDEN');
  }
}

export class ValidationError extends AppError {
  constructor(
    message: string,
    public details?: Array<{ field: string; message: string }>,
  ) {
    super(400, message, 'VALIDATION_ERROR');
  }
}

export class ConflictError extends AppError {
  constructor(message: string) {
    super(409, message, 'CONFLICT');
  }
}

export class RateLimitError extends AppError {
  constructor(public retryAfter: number) {
    super(429, 'Too many requests', 'RATE_LIMITED');
  }
}

export function errorResponse(error: unknown): NextResponse {
  if (error instanceof AppError) {
    const body: Record<string, unknown> = { error: error.message, code: error.code };
    if (error instanceof ValidationError && error.details) {
      body.details = error.details;
    }
    if (error instanceof RateLimitError) {
      body.retryAfter = error.retryAfter;
    }
    return NextResponse.json(body, { status: error.statusCode });
  }

  if (error instanceof ZodError) {
    return NextResponse.json(
      {
        error: 'Validation failed',
        code: 'VALIDATION_ERROR',
        details: error.errors.map((e) => ({
          field: e.path.join('.'),
          message: e.message,
        })),
      },
      { status: 400 },
    );
  }

  // Never leak internals
  logger.error({ err: error }, 'Unhandled error');
  return NextResponse.json({ error: 'Internal server error' }, { status: 500 });
}
```
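
The hierarchy maps onto HTTP statuses via `instanceof` dispatch: any `AppError` subclass carries its own status code, and anything else collapses to an opaque 500. A dependency-free sketch of that dispatch, using trimmed copies of the classes without Next.js:

```typescript
// Trimmed copies of the error classes, to show the instanceof dispatch
// that errorResponse relies on. No Next.js imports here.
class AppError extends Error {
  constructor(
    public statusCode: number,
    message: string,
    public code?: string,
  ) {
    super(message);
    this.name = 'AppError';
  }
}

class NotFoundError extends AppError {
  constructor(entity: string) {
    super(404, `${entity} not found`, 'NOT_FOUND');
  }
}

function statusFor(error: unknown): number {
  // Known application errors carry their status; unknown errors are opaque.
  return error instanceof AppError ? error.statusCode : 500;
}
```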

**Step 22: Audit log module**

**File:** `src/lib/audit.ts`

```typescript
import { db } from '@/lib/db';
import { auditLogs } from '@/lib/db/schema';
import { logger } from '@/lib/logger';

interface AuditLogParams {
  userId: string;
  portId: string;
  action:
    | 'create'
    | 'update'
    | 'delete'
    | 'archive'
    | 'restore'
    | 'merge'
    | 'login'
    | 'logout'
    | 'permission_denied'
    | 'revert';
  entityType: string;
  entityId: string;
  fieldChanged?: string;
  oldValue?: Record<string, unknown>;
  newValue?: Record<string, unknown>;
  metadata?: Record<string, unknown>;
  ipAddress: string;
  userAgent: string;
}

export async function createAuditLog(params: AuditLogParams): Promise<void> {
  try {
    await db.insert(auditLogs).values({
      portId: params.portId || null,
      userId: params.userId || null,
      action: params.action,
      entityType: params.entityType,
      entityId: params.entityId,
      fieldChanged: params.fieldChanged,
      oldValue: maskSensitiveFields(params.oldValue),
      newValue: maskSensitiveFields(params.newValue),
      metadata: params.metadata,
      ipAddress: params.ipAddress,
      userAgent: params.userAgent,
    });
  } catch (err) {
    // Audit logging must never crash the parent operation
    logger.error(
      { err, params: { ...params, oldValue: undefined, newValue: undefined } },
      'Failed to write audit log',
    );
  }
}

/**
 * Computes field-level diff between old and new records.
 * Returns an array of { field, oldValue, newValue } for changed fields.
 */
export function diffFields(
  oldRecord: Record<string, unknown>,
  newRecord: Record<string, unknown>,
): Array<{ field: string; oldValue: unknown; newValue: unknown }> {
  const changes: Array<{ field: string; oldValue: unknown; newValue: unknown }> = [];
  for (const key of Object.keys(newRecord)) {
    if (JSON.stringify(oldRecord[key]) !== JSON.stringify(newRecord[key])) {
      changes.push({ field: key, oldValue: oldRecord[key], newValue: newRecord[key] });
    }
  }
  return changes;
}

const SENSITIVE_FIELDS = new Set(['email', 'phone', 'password', 'credentials_enc', 'token']);

function maskSensitiveFields(data?: Record<string, unknown>): Record<string, unknown> | undefined {
  if (!data) return undefined;
  const masked = { ...data };
  for (const key of Object.keys(masked)) {
    if (SENSITIVE_FIELDS.has(key) && typeof masked[key] === 'string') {
      const val = masked[key] as string;
      masked[key] = val.length > 4 ? `${val.slice(0, 2)}***${val.slice(-2)}` : '***';
    }
  }
  return masked;
}
```
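
`diffFields` is pure, so it is easy to exercise in isolation. A standalone copy with sample records:

```typescript
// Standalone copy of diffFields from audit.ts, with example records.
function diffFields(
  oldRecord: Record<string, unknown>,
  newRecord: Record<string, unknown>,
): Array<{ field: string; oldValue: unknown; newValue: unknown }> {
  const changes: Array<{ field: string; oldValue: unknown; newValue: unknown }> = [];
  for (const key of Object.keys(newRecord)) {
    if (JSON.stringify(oldRecord[key]) !== JSON.stringify(newRecord[key])) {
      changes.push({ field: key, oldValue: oldRecord[key], newValue: newRecord[key] });
    }
  }
  return changes;
}

const changes = diffFields(
  { name: 'Slip A-12', status: 'available' },
  { name: 'Slip A-12', status: 'occupied' },
);
```

One caveat worth noting: iteration is over `newRecord`'s keys, so a field that exists in the old record but is absent from the new one is not reported as a change.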

**Step 23: Database utility functions**

**File:** `src/lib/db/utils.ts`

```typescript
import { sql, eq, and } from 'drizzle-orm';
import type { PgTable } from 'drizzle-orm/pg-core';

import { db, type Database } from '@/lib/db';

/**
 * Transaction wrapper with automatic rollback on error.
 */
export async function withTransaction<T>(fn: (tx: Database) => Promise<T>): Promise<T> {
  return db.transaction(fn);
}

/**
 * Soft delete: sets archived_at timestamp.
 * The table must have an `archivedAt` column and `portId` column.
 */
export async function softDelete(
  table: PgTable & { archivedAt: any; id: any; portId: any },
  id: string,
  portId: string,
): Promise<void> {
  await db
    .update(table)
    .set({ archivedAt: sql`now()` } as any)
    .where(and(eq(table.id, id), eq(table.portId, portId)));
}

/**
 * Restore: clears archived_at timestamp.
 */
export async function restore(
  table: PgTable & { archivedAt: any; id: any; portId: any },
  id: string,
  portId: string,
): Promise<void> {
  await db
    .update(table)
    .set({ archivedAt: null } as any)
    .where(and(eq(table.id, id), eq(table.portId, portId)));
}
```

**Step 24: MinIO client**

```bash
pnpm add minio
```

**File:** `src/lib/minio/index.ts`

```typescript
import { Client } from 'minio';

import { logger } from '@/lib/logger';

export const minioClient = new Client({
  endPoint: process.env.MINIO_ENDPOINT!,
  port: parseInt(process.env.MINIO_PORT!, 10),
  useSSL: process.env.MINIO_USE_SSL === 'true',
  accessKey: process.env.MINIO_ACCESS_KEY!,
  secretKey: process.env.MINIO_SECRET_KEY!,
});

const BUCKET = process.env.MINIO_BUCKET!;

export async function ensureBucket(): Promise<void> {
  try {
    const exists = await minioClient.bucketExists(BUCKET);
    if (!exists) {
      await minioClient.makeBucket(BUCKET);
      logger.info({ bucket: BUCKET }, 'MinIO bucket created');
    }
  } catch (err) {
    logger.error({ err }, 'Failed to ensure MinIO bucket');
    throw err;
  }
}

/**
 * Generate presigned download URL (15-minute expiry per SECURITY-GUIDELINES.md)
 */
export async function getPresignedUrl(objectKey: string, expirySeconds = 900): Promise<string> {
  return minioClient.presignedGetObject(BUCKET, objectKey, expirySeconds);
}

/**
 * Build storage path from components (no user input in path — UUIDs only).
 * Format: {portSlug}/{entity}/{entityId}/{uuid}.{ext}
 */
export function buildStoragePath(
  portSlug: string,
  entity: string,
  entityId: string,
  fileId: string,
  extension: string,
): string {
  return `${portSlug}/${entity}/${entityId}/${fileId}.${extension}`;
}
```
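
Because `buildStoragePath` takes only server-generated slugs and UUIDs, the resulting object keys are predictable and free of user input. A standalone copy showing the key shape:

```typescript
// Standalone copy of buildStoragePath from the MinIO module.
// Key format: {portSlug}/{entity}/{entityId}/{fileId}.{ext}
function buildStoragePath(
  portSlug: string,
  entity: string,
  entityId: string,
  fileId: string,
  extension: string,
): string {
  return `${portSlug}/${entity}/${entityId}/${fileId}.${extension}`;
}

// Sample IDs below are placeholders, not real records.
const key = buildStoragePath('port-nimara', 'documents', 'e1', 'f1', 'pdf');
```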

**Step 25: Health check endpoint**

**File:** `src/app/api/health/route.ts`

```typescript
import { NextResponse } from 'next/server';

import { db } from '@/lib/db';
import { redis } from '@/lib/redis';
import { minioClient } from '@/lib/minio';
import { sql } from 'drizzle-orm';

export async function GET() {
  const checks: Record<string, 'ok' | 'error'> = {};

  try {
    await db.execute(sql`SELECT 1`);
    checks.postgres = 'ok';
  } catch {
    checks.postgres = 'error';
  }

  try {
    await redis.ping();
    checks.redis = 'ok';
  } catch {
    checks.redis = 'error';
  }

  try {
    await minioClient.bucketExists(process.env.MINIO_BUCKET!);
    checks.minio = 'ok';
  } catch {
    checks.minio = 'error';
  }

  const allOk = Object.values(checks).every((v) => v === 'ok');
  return NextResponse.json(
    { status: allOk ? 'healthy' : 'degraded', checks, timestamp: new Date().toISOString() },
    { status: allOk ? 200 : 503 },
  );
}
```
|
|||
|
|
|
|||
|
|
---
|
|||
|
|
|
|||
|
|
### Day 3 — Socket.io + BullMQ + Layout Shell (Part 1)
|
|||
|
|
|
|||
|
|
#### Morning: Real-time + Background Jobs (3 hours)
|
|||
|
|
|
|||
|
|
**Step 26: Socket.io server**
|
|||
|
|
|
|||
|
|
```bash
|
|||
|
|
pnpm add socket.io socket.io-client @socket.io/redis-adapter
|
|||
|
|
```
|
|||
|
|
|
|||
|
|
**File:** `src/lib/socket/events.ts`
|
|||
|
|
|
|||
|
|
Type-safe event definitions matching `11-REALTIME-AND-BACKGROUND-JOBS.md` Section 2:
|
|||
|
|
|
|||
|
|
```typescript
// Server → Client events
export interface ServerToClientEvents {
  // Berth events
  'berth:statusChanged': (payload: {
    berthId: string;
    oldStatus: string;
    newStatus: string;
    triggeredBy: string;
  }) => void;
  'berth:updated': (payload: { berthId: string; changedFields: string[] }) => void;
  'berth:waitingListChanged': (payload: {
    berthId: string;
    action: string;
    entry: unknown;
  }) => void;
  'berth:maintenanceAdded': (payload: { berthId: string; logEntry: unknown }) => void;

  // Client events
  'client:created': (payload: { clientId: string; clientName: string; source: string }) => void;
  'client:updated': (payload: { clientId: string; changedFields: string[] }) => void;
  'client:archived': (payload: { clientId: string }) => void;
  'client:restored': (payload: { clientId: string }) => void;
  'client:merged': (payload: { survivingId: string; mergedId: string }) => void;
  'client:noteAdded': (payload: {
    clientId: string;
    noteId: string;
    authorName: string;
    preview: string;
  }) => void;
  'client:duplicateDetected': (payload: {
    clientAId: string;
    clientBId: string;
    score: number;
    reason: string;
  }) => void;

  // Interest events
  'interest:created': (payload: {
    interestId: string;
    clientId: string;
    berthId: string | null;
    source: string;
  }) => void;
  'interest:updated': (payload: { interestId: string; changedFields: string[] }) => void;
  'interest:stageChanged': (payload: {
    interestId: string;
    oldStage: string;
    newStage: string;
    clientName: string;
    berthNumber: string;
  }) => void;
  'interest:berthLinked': (payload: { interestId: string; berthId: string }) => void;
  'interest:berthUnlinked': (payload: { interestId: string; berthId: string }) => void;
  'interest:archived': (payload: { interestId: string }) => void;
  'interest:noteAdded': (payload: {
    interestId: string;
    noteId: string;
    authorName: string;
    preview: string;
  }) => void;
  'interest:recommendationsGenerated': (payload: {
    interestId: string;
    count: number;
    topBerthId: string;
  }) => void;
  'interest:recommendationAdded': (payload: {
    interestId: string;
    berthId: string;
    source: string;
    matchScore: number;
  }) => void;
  'interest:leadCategoryChanged': (payload: {
    interestId: string;
    oldCategory: string;
    newCategory: string;
    auto: boolean;
  }) => void;

  // Document events
  'document:created': (payload: { documentId: string; type: string; interestId: string }) => void;
  'document:sent': (payload: { documentId: string; type: string; signerCount: number }) => void;
  'document:signed': (payload: {
    documentId: string;
    signerName: string;
    signerRole: string;
    remainingSigners: number;
  }) => void;
  'document:completed': (payload: {
    documentId: string;
    type: string;
    interestId: string;
    clientName: string;
  }) => void;
  'document:expired': (payload: { documentId: string }) => void;
  'document:reminderSent': (payload: { documentId: string; recipientEmail: string }) => void;

  // Financial events
  'expense:created': (payload: {
    expenseId: string;
    amount: number;
    currency: string;
    category: string;
  }) => void;
  'expense:updated': (payload: { expenseId: string; changedFields: string[] }) => void;
  'invoice:created': (payload: {
    invoiceId: string;
    invoiceNumber: string;
    total: number;
    clientName: string;
  }) => void;
  'invoice:sent': (payload: {
    invoiceId: string;
    invoiceNumber: string;
    recipientEmail: string;
  }) => void;
  'invoice:paid': (payload: { invoiceId: string; invoiceNumber: string; amount: number }) => void;
  'invoice:overdue': (payload: {
    invoiceId: string;
    invoiceNumber: string;
    daysPastDue: number;
  }) => void;

  // Reminder & Calendar events
  'reminder:created': (payload: {
    reminderId: string;
    title: string;
    assignedTo: string;
    dueAt: string;
  }) => void;
  'reminder:updated': (payload: { reminderId: string; changedFields: string[] }) => void;
  'reminder:completed': (payload: {
    reminderId: string;
    title: string;
    completedBy: string;
  }) => void;
  'reminder:overdue': (payload: { reminderId: string; title: string; dueAt: string }) => void;
  'reminder:snoozed': (payload: { reminderId: string; snoozedUntil: string }) => void;
  'calendar:synced': (payload: { eventCount: number; lastSyncAt: string }) => void;
  'calendar:disconnected': (payload: { reason: string }) => void;

  // Notification events
  'notification:new': (payload: {
    notificationId: string;
    type: string;
    title: string;
    description: string;
    link: string;
  }) => void;
  'notification:unreadCount': (payload: { count: number }) => void;

  // System events
  'system:alert': (payload: { alertType: string; message: string; severity: string }) => void;
  'system:jobFailed': (payload: { queueName: string; jobId: string; error: string }) => void;
  'registration:new': (payload: {
    clientId: string;
    interestId: string;
    clientName: string;
    berthNumber: string;
  }) => void;

  // File events
  'file:uploaded': (payload: {
    fileId: string;
    filename: string;
    clientId: string;
    category: string;
  }) => void;
  'file:deleted': (payload: { fileId: string; filename: string }) => void;
}

// Client → Server events (minimal — most actions go through REST API)
export interface ClientToServerEvents {
  'join:entity': (payload: { type: 'berth' | 'client' | 'interest'; id: string }) => void;
  'leave:entity': (payload: { type: 'berth' | 'client' | 'interest'; id: string }) => void;
}
```

**File:** `src/lib/socket/server.ts`

```typescript
import { Server } from 'socket.io';
import { createAdapter } from '@socket.io/redis-adapter';
import type { Server as HTTPServer } from 'node:http';

import { redis } from '@/lib/redis';
import { auth } from '@/lib/auth';
import { logger } from '@/lib/logger';
import type { ServerToClientEvents, ClientToServerEvents } from './events';

let io: Server<ClientToServerEvents, ServerToClientEvents> | null = null;

export function initSocketServer(
  httpServer: HTTPServer,
): Server<ClientToServerEvents, ServerToClientEvents> {
  const pubClient = redis.duplicate();
  const subClient = redis.duplicate();

  io = new Server<ClientToServerEvents, ServerToClientEvents>(httpServer, {
    path: '/socket.io/',
    adapter: createAdapter(pubClient, subClient),
    cors: {
      origin: process.env.APP_URL,
      credentials: true,
    },
    connectionStateRecovery: { maxDisconnectionDuration: 2 * 60 * 1000 },
    maxHttpBufferSize: 1e6, // 1MB message limit per SECURITY-GUIDELINES.md
  });

  // Auth middleware — validate session cookie
  io.use(async (socket, next) => {
    try {
      const cookie = socket.handshake.headers.cookie;
      if (!cookie) return next(new Error('Authentication required'));

      // Parse session from cookie
      const session = await auth.api.getSession({
        headers: new Headers({ cookie }),
      });
      if (!session?.user) return next(new Error('Invalid session'));

      // Enforce max 10 connections per user
      const userSockets = await io!.in(`user:${session.user.id}`).fetchSockets();
      if (userSockets.length >= 10) {
        return next(new Error('Maximum connections reached'));
      }

      socket.data = {
        userId: session.user.id,
        portId: socket.handshake.auth.portId,
      };
      next();
    } catch {
      next(new Error('Authentication failed'));
    }
  });

  // Connection handler
  io.on('connection', (socket) => {
    const { userId, portId } = socket.data;
    logger.debug({ userId, portId }, 'Socket connected');

    // Auto-join rooms
    socket.join(`user:${userId}`);
    if (portId) socket.join(`port:${portId}`);

    // Entity-level room management
    socket.on('join:entity', ({ type, id }) => {
      socket.join(`${type}:${id}`);
    });
    socket.on('leave:entity', ({ type, id }) => {
      socket.leave(`${type}:${id}`);
    });

    // Idle timeout (30 seconds per SECURITY-GUIDELINES.md)
    let idleTimer = setTimeout(() => socket.disconnect(), 30_000);
    socket.onAny(() => {
      clearTimeout(idleTimer);
      idleTimer = setTimeout(() => socket.disconnect(), 30_000);
    });

    socket.on('disconnect', () => {
      clearTimeout(idleTimer);
      logger.debug({ userId }, 'Socket disconnected');
    });
  });

  return io;
}

export function getIO(): Server<ClientToServerEvents, ServerToClientEvents> {
  if (!io) throw new Error('Socket.io not initialized');
  return io;
}

/**
 * Emit an event to a specific room. Used by service layer after mutations.
 */
export function emitToRoom<E extends keyof ServerToClientEvents>(
  room: string,
  event: E,
  ...args: Parameters<ServerToClientEvents[E]>
): void {
  if (!io) return;
  io.to(room).emit(event, ...args);
}
```
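
To make the room model concrete, here is a self-contained simulation of how one mutation fans out to both the port-wide room and an entity-specific room. The `join`/`emitToRoomSim` helpers below are local stand-ins for illustration only, not the real Socket.io API:

```typescript
// Tiny in-memory stand-in for room fan-out (illustration only).
type Handler = (payload: unknown) => void;
const rooms = new Map<string, Handler[]>();

function join(room: string, handler: Handler): void {
  rooms.set(room, [...(rooms.get(room) ?? []), handler]);
}

function emitToRoomSim(room: string, payload: unknown): void {
  for (const handler of rooms.get(room) ?? []) handler(payload);
}

// One browser tab watching the whole port, another on a berth detail page.
const received: string[] = [];
join('port:p1', () => received.push('port:p1'));
join('berth:b42', () => received.push('berth:b42'));

// A service-layer mutation emits to both rooms, as server.ts does via io.to().
const payload = { berthId: 'b42', oldStatus: 'available', newStatus: 'under_offer' };
emitToRoomSim('port:p1', payload);
emitToRoomSim('berth:b42', payload);

console.log(received.join(',')); // prints "port:p1,berth:b42"
```

This is why the service layer emits twice after a berth mutation: the port room covers list views, while the entity room covers anyone focused on that berth's detail page.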

**File:** `src/providers/socket-provider.tsx`

```typescript
'use client';

import { createContext, useContext, useEffect, useState, type ReactNode } from 'react';
import { io, type Socket } from 'socket.io-client';

import { useSession } from '@/lib/auth/client';
import { usePortStore } from '@/stores/ui-store';

const SocketContext = createContext<Socket | null>(null);

export function SocketProvider({ children }: { children: ReactNode }) {
  const { data: session } = useSession();
  const currentPortId = usePortStore((s) => s.currentPortId);
  const [socket, setSocket] = useState<Socket | null>(null);

  useEffect(() => {
    if (!session?.user || !currentPortId) return;

    const s = io(process.env.NEXT_PUBLIC_APP_URL!, {
      path: '/socket.io/',
      withCredentials: true,
      auth: { portId: currentPortId },
      transports: ['websocket', 'polling'],
    });

    s.on('connect', () => setSocket(s));
    s.on('disconnect', () => setSocket(null));

    return () => {
      s.disconnect();
      setSocket(null);
    };
    // Depend on the stable user id, not the session object — an object that
    // changes identity on re-render would tear down and rebuild the socket.
  }, [session?.user?.id, currentPortId]);

  return (
    <SocketContext.Provider value={socket}>
      {children}
    </SocketContext.Provider>
  );
}

export function useSocket() {
  return useContext(SocketContext);
}
```

**Step 27: BullMQ setup**

```bash
pnpm add bullmq
```

**File:** `src/lib/queue/index.ts`

```typescript
import { Queue } from 'bullmq';

import { redis } from '@/lib/redis';

// 10 queues matching 11-REALTIME-AND-BACKGROUND-JOBS.md Section 3.1
const QUEUE_CONFIGS = {
  email: { concurrency: 5, maxAttempts: 5 },
  documents: { concurrency: 3, maxAttempts: 5 },
  notifications: { concurrency: 10, maxAttempts: 3 },
  import: { concurrency: 1, maxAttempts: 1 },
  export: { concurrency: 2, maxAttempts: 3 },
  reports: { concurrency: 1, maxAttempts: 3 },
  webhooks: { concurrency: 5, maxAttempts: 3 },
  maintenance: { concurrency: 1, maxAttempts: 3 },
  ai: { concurrency: 2, maxAttempts: 3 },
  bulk: { concurrency: 2, maxAttempts: 3 },
} as const;

export type QueueName = keyof typeof QUEUE_CONFIGS;

const queues = new Map<QueueName, Queue>();

export function getQueue(name: QueueName): Queue {
  let queue = queues.get(name);
  if (!queue) {
    queue = new Queue(name, {
      connection: redis,
      defaultJobOptions: {
        attempts: QUEUE_CONFIGS[name].maxAttempts,
        backoff: { type: 'exponential', delay: 1000 },
        removeOnComplete: { age: 24 * 3600 },
        removeOnFail: { age: 7 * 24 * 3600 },
      },
    });
    queues.set(name, queue);
  }
  return queue;
}

export { QUEUE_CONFIGS };
```
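
The `backoff: { type: 'exponential', delay: 1000 }` default is worth a quick sanity check. BullMQ's exponential strategy waits `delay * 2^(attemptsMade - 1)` milliseconds between retries, so an `email` job with `maxAttempts: 5` retries fast at first and backs off quickly:

```typescript
// Sketch of BullMQ's exponential backoff formula: delay * 2^(attemptsMade - 1).
function backoffDelayMs(attemptsMade: number, baseDelayMs = 1000): number {
  return baseDelayMs * 2 ** (attemptsMade - 1);
}

// maxAttempts: 5 means 4 retries after the initial run, spaced like this:
const delays = [1, 2, 3, 4].map((attempt) => backoffDelayMs(attempt));
console.log(delays.join(', ')); // prints "1000, 2000, 4000, 8000"
```

An `import` job (`maxAttempts: 1`) never retries at all, which is deliberate: re-running a half-finished import could duplicate rows.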

**File:** `src/lib/queue/scheduler.ts`

```typescript
import { getQueue } from './index';
import { logger } from '@/lib/logger';

/**
 * Register all recurring jobs from 11-REALTIME-AND-BACKGROUND-JOBS.md Section 3.2
 * Called once on server startup.
 */
export async function registerRecurringJobs(): Promise<void> {
  const recurring = [
    { queue: 'documents', name: 'signature-poll', pattern: '0 */6 * * *' },
    { queue: 'notifications', name: 'reminder-check', pattern: '0 * * * *' },
    { queue: 'notifications', name: 'reminder-overdue-check', pattern: '*/15 * * * *' },
    { queue: 'maintenance', name: 'calendar-sync', pattern: '*/30 * * * *' },
    { queue: 'notifications', name: 'invoice-overdue-check', pattern: '0 8 * * *' },
    { queue: 'notifications', name: 'tenure-expiry-check', pattern: '0 8 * * *' },
    { queue: 'maintenance', name: 'currency-refresh', pattern: '0 */6 * * *' },
    { queue: 'maintenance', name: 'database-backup', pattern: '0 2 * * *' },
    { queue: 'maintenance', name: 'backup-cleanup', pattern: '0 3 * * 0' },
    { queue: 'maintenance', name: 'session-cleanup', pattern: '0 4 * * *' },
    { queue: 'reports', name: 'report-scheduler', pattern: '* * * * *' },
    { queue: 'maintenance', name: 'temp-file-cleanup', pattern: '0 5 * * *' },
    { queue: 'maintenance', name: 'form-expiry-check', pattern: '0 * * * *' },
  ] as const;

  for (const job of recurring) {
    // `as const` narrows job.queue to literal queue names, so no cast is needed
    const queue = getQueue(job.queue);
    await queue.upsertJobScheduler(
      job.name,
      { pattern: job.pattern },
      { data: {}, name: job.name },
    );
    logger.info(
      { queue: job.queue, job: job.name, pattern: job.pattern },
      'Registered recurring job',
    );
  }
}
```

**File:** `src/lib/queue/workers/` — one stub file per queue (e.g., `email.ts`, `documents.ts`, etc.):

```typescript
// src/lib/queue/workers/email.ts
import { Worker, type Job } from 'bullmq';
import { redis } from '@/lib/redis';
import { logger } from '@/lib/logger';

export const emailWorker = new Worker(
  'email',
  async (job: Job) => {
    logger.info({ jobId: job.id, jobName: job.name }, 'Processing email job');
    // TODO: implement in Layer 2
  },
  {
    connection: redis,
    concurrency: 5,
  },
);

emailWorker.on('failed', (job, err) => {
  logger.error({ jobId: job?.id, err }, 'Email job failed');
});
```

#### Afternoon: Layout Shell Start (3 hours) — continues into Day 4

**Step 28: Install shadcn/ui + core components**

```bash
pnpm dlx shadcn@latest init
```

Configure: New York style, Tailwind CSS, navy theme.

Install all core components from `14-TECHNICAL-DECISIONS.md` Section 2.2:

```bash
pnpm dlx shadcn@latest add button input label select textarea checkbox radio-group switch dialog sheet dropdown-menu command tabs table card badge avatar tooltip popover calendar form skeleton separator scroll-area alert-dialog accordion breadcrumb navigation-menu pagination progress slider sonner
```

**Step 29: Tailwind config with design tokens**

**File:** `tailwind.config.ts`

Apply ALL tokens from `15-DESIGN-TOKENS.md`:

- Brand colors with full tint ladders (`navy`, `brand`, `sage`, `mint`, `teal`, `purple`)
- Semantic color variables via CSS custom properties
- Shadows using `rgba(30, 40, 68, ...)` values
- Typography: Inter (font-sans), Georgia (font-serif), JetBrains Mono (font-mono)
- Border radius variable

**File:** `src/app/globals.css`

All CSS custom properties from `15-DESIGN-TOKENS.md` Section 5 — light mode defaults and dark mode overrides via `[data-theme="dark"]` or `.dark` class. Full HSL values for shadcn/ui compatibility.
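
The wiring can be sketched as follows. This is a structural sketch only: the color values shown are illustrative placeholders (the authoritative tint ladders live in `15-DESIGN-TOKENS.md`), and the point is that semantic colors resolve through CSS variables so dark mode only swaps custom properties in `globals.css`:

```typescript
// Sketch only — illustrative values, not the real token ladder.
import type { Config } from 'tailwindcss';

const config: Config = {
  darkMode: 'class',
  content: ['./src/**/*.{ts,tsx}'],
  theme: {
    extend: {
      colors: {
        // Brand ladder entry (real tints come from 15-DESIGN-TOKENS.md)
        navy: { DEFAULT: '#1e2844', 900: '#171f35' },
        // Semantic colors read CSS custom properties, so the dark-mode
        // override in globals.css swaps them without touching this file.
        background: 'hsl(var(--background))',
        foreground: 'hsl(var(--foreground))',
      },
      fontFamily: {
        sans: ['var(--font-sans)', 'sans-serif'],
        serif: ['Georgia', 'serif'],
        mono: ['var(--font-mono)', 'monospace'],
      },
      borderRadius: { DEFAULT: 'var(--radius)' },
      boxShadow: { card: '0 1px 3px rgba(30, 40, 68, 0.12)' }, // example shadow
    },
  },
};

export default config;
```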

---

### Day 4 — Layout Shell (Part 2) + Security + Verification

#### Morning: Layout Components (3 hours)

**Step 30: Root layout + providers**

**File:** `src/app/layout.tsx`

```typescript
import type { Metadata } from 'next';
import { Inter, JetBrains_Mono } from 'next/font/google';
import { Toaster } from '@/components/ui/sonner';
import './globals.css';

const inter = Inter({ subsets: ['latin'], variable: '--font-sans' });
const jetbrainsMono = JetBrains_Mono({ subsets: ['latin'], variable: '--font-mono' });

export const metadata: Metadata = {
  title: 'Port Nimara CRM',
  description: 'Marina management system',
};

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en" suppressHydrationWarning>
      <body className={`${inter.variable} ${jetbrainsMono.variable} font-sans antialiased`}>
        {children}
        <Toaster richColors position="top-right" />
      </body>
    </html>
  );
}
```

**File:** `src/app/(dashboard)/layout.tsx`

```typescript
import { redirect } from 'next/navigation';
import { headers } from 'next/headers';
import { eq } from 'drizzle-orm';

import { auth } from '@/lib/auth';
import { db } from '@/lib/db';
import { userPortRoles } from '@/lib/db/schema'; // adjust to the actual schema export path
import { QueryProvider } from '@/providers/query-provider';
import { SocketProvider } from '@/providers/socket-provider';
import { PortProvider } from '@/providers/port-provider';
import { Sidebar } from '@/components/layout/sidebar';
import { Topbar } from '@/components/layout/topbar';

export default async function DashboardLayout({ children }: { children: React.ReactNode }) {
  const session = await auth.api.getSession({ headers: await headers() });
  if (!session?.user) redirect('/login');

  // Load user's port assignments for PortProvider
  const portRoles = await db.query.userPortRoles.findMany({
    where: eq(userPortRoles.userId, session.user.id),
    with: { port: true, role: true },
  });

  return (
    <QueryProvider>
      <PortProvider ports={portRoles.map(pr => pr.port)} defaultPortId={portRoles[0]?.port.id}>
        <SocketProvider>
          <div className="flex h-screen overflow-hidden">
            <Sidebar ports={portRoles} />
            <div className="flex-1 flex flex-col overflow-hidden">
              <Topbar />
              <main className="flex-1 overflow-y-auto bg-background p-6">
                {children}
              </main>
            </div>
          </div>
        </SocketProvider>
      </PortProvider>
    </QueryProvider>
  );
}
```

**File:** `src/components/layout/sidebar.tsx`

- Dark navy background (`bg-[#1e2844]`)
- Logo area: "PN" mark + "Port Nimara" + "Marina CRM"
- Navigation sections per `13-UI-PAGE-MAP.md`:
  - **Main:** Dashboard, Clients, Interests, Berths
  - **Financial:** Expenses, Invoices
  - **Operations:** Files, Email, Reminders
  - **Admin** (expandable, permission-gated): Users, Roles, Ports, Audit, Settings, etc.
- Active state: `border-l-2 border-brand bg-sidebar-active text-white`
- Hover state: `bg-[#171f35]`
- Icons: Lucide React (`LayoutDashboard`, `Users`, `Bookmark`, `Anchor`, `Receipt`, `FileText`, `FolderOpen`, `Mail`, `Bell`, `Settings`, `Shield`)
- Collapsible: persisted via Zustand `sidebarCollapsed`
- Mobile: Sheet/drawer triggered by hamburger
- User footer: avatar + name + role badge
- shadcn components: `Sheet`, `ScrollArea`, `Tooltip`, `Badge`, `Avatar`, `Separator`
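
The sections above can be expressed as data the component maps over. The field names and the `itemHref` helper below are assumptions for illustration, not part of the spec; the labels and Lucide icon names come from the list above:

```typescript
// Hypothetical nav data shape for the sidebar to render from.
interface NavItem {
  label: string;
  href: string; // segment appended to /[portSlug]; empty string = dashboard
  icon: string; // Lucide icon name from the list above
}

const NAV_SECTIONS: { title: string; items: NavItem[] }[] = [
  {
    title: 'Main',
    items: [
      { label: 'Dashboard', href: '', icon: 'LayoutDashboard' },
      { label: 'Clients', href: 'clients', icon: 'Users' },
      { label: 'Interests', href: 'interests', icon: 'Bookmark' },
      { label: 'Berths', href: 'berths', icon: 'Anchor' },
    ],
  },
  {
    title: 'Financial',
    items: [
      { label: 'Expenses', href: 'expenses', icon: 'Receipt' },
      { label: 'Invoices', href: 'invoices', icon: 'FileText' },
    ],
  },
];

// Links resolve relative to the current [portSlug] segment:
function itemHref(portSlug: string, item: NavItem): string {
  return `/${portSlug}${item.href ? `/${item.href}` : ''}`;
}

const berthsHref = itemHref('nimara-main', NAV_SECTIONS[0].items[3]);
console.log(berthsHref); // prints "/nimara-main/berths"
```

Keeping the nav as data (rather than hard-coded JSX) makes the Admin section's permission gating a simple filter over items.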

**File:** `src/components/layout/topbar.tsx`

- Page title (derived from route params)
- Search box: `Button` with `⌘K` label → placeholder for Command palette (implemented in L3)
- Notification bell: `Button` with `Badge` unread count dot → placeholder panel
- "+ New" dropdown: `DropdownMenu` with context-aware options (New Client, New Interest, New Expense)
- Port switcher (only if 2+ ports): `Select` dropdown with port names
- User menu: `DropdownMenu` with Profile, Dark Mode toggle, Scratchpad, Logout
- shadcn components: `Button`, `DropdownMenu`, `Badge`, `Avatar`, `Separator`

**File:** `src/components/layout/port-switcher.tsx`

- Hidden when user has only 1 port
- `Select` dropdown showing port name
- On change: updates Zustand `currentPortId` → invalidates all TanStack Query caches → Socket.io reconnects to new port room
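
The ordering of that switch sequence matters (stale cached data must not survive into the new port context), so here is a self-contained sketch with local stand-ins. All three objects below are hypothetical placeholders for the Zustand store, the TanStack Query client, and the socket connection:

```typescript
// Stand-ins for the three systems involved in a port switch (names assumed).
const calls: string[] = [];

const store = {
  currentPortId: 'p1',
  setPort(id: string) {
    this.currentPortId = id;
    calls.push(`store:${id}`);
  },
};
const queryCache = { invalidateAll: () => calls.push('cache:invalidated') };
const socket = { reconnect: (portId: string) => calls.push(`socket:${portId}`) };

function onPortChange(portId: string): void {
  store.setPort(portId);      // 1. update Zustand state first
  queryCache.invalidateAll(); // 2. drop every cached query for the old port
  socket.reconnect(portId);   // 3. Socket.io rejoins the new port room
}

onPortChange('p2');
console.log(calls.join(' -> ')); // prints "store:p2 -> cache:invalidated -> socket:p2"
```

In the real component, step 3 happens implicitly: `SocketProvider`'s effect depends on `currentPortId`, so the store update alone triggers the reconnect.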

**File:** `src/components/layout/breadcrumbs.tsx`

- Auto-generated from `usePathname()` route segments
- Maps `[portSlug]` to port name
- Clickable segments using shadcn `Breadcrumb` component
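
A minimal sketch of that derivation (the `buildCrumbs` helper and the port-name map are assumed names, not project exports): split the pathname into segments, map the leading `[portSlug]` segment to its display name, and accumulate hrefs for clickable links:

```typescript
interface Crumb {
  label: string;
  href: string;
}

// Sketch: derive breadcrumbs from a pathname, translating the port slug.
function buildCrumbs(pathname: string, portNames: Record<string, string>): Crumb[] {
  const segments = pathname.split('/').filter(Boolean);
  return segments.map((seg, i) => ({
    // Only the first segment is a [portSlug]; fall back to the raw segment.
    label: i === 0 ? (portNames[seg] ?? seg) : seg,
    // Each crumb links to the path up to and including its own segment.
    href: '/' + segments.slice(0, i + 1).join('/'),
  }));
}

const crumbs = buildCrumbs('/nimara-main/clients', { 'nimara-main': 'Port Nimara' });
console.log(crumbs.map((c) => c.label).join(' / ')); // prints "Port Nimara / clients"
```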

**Step 31: Zustand UI store**

```bash
pnpm add zustand
```

**File:** `src/stores/ui-store.ts`

```typescript
import { create } from 'zustand';
import { persist } from 'zustand/middleware';

interface UIStore {
  sidebarCollapsed: boolean;
  currentPortId: string | null;
  currentPortSlug: string | null;
  darkMode: boolean;
  toggleSidebar: () => void;
  setPort: (portId: string, portSlug: string) => void;
  toggleDarkMode: () => void;
}

export const useUIStore = create<UIStore>()(
  persist(
    (set) => ({
      sidebarCollapsed: false,
      currentPortId: null,
      currentPortSlug: null,
      darkMode: false,
      toggleSidebar: () => set((s) => ({ sidebarCollapsed: !s.sidebarCollapsed })),
      setPort: (portId, portSlug) => set({ currentPortId: portId, currentPortSlug: portSlug }),
      toggleDarkMode: () => set((s) => ({ darkMode: !s.darkMode })),
    }),
    {
      name: 'pn-crm-ui',
      partialize: (state) => ({
        sidebarCollapsed: state.sidebarCollapsed,
        currentPortId: state.currentPortId,
        currentPortSlug: state.currentPortSlug,
        darkMode: state.darkMode,
      }),
    },
  ),
);

// Alias for port-specific access
export const usePortStore = useUIStore;
```

**Step 32: TanStack Query provider**

```bash
pnpm add @tanstack/react-query @tanstack/react-query-devtools
```

**File:** `src/providers/query-provider.tsx`

```typescript
'use client';

import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import { ReactQueryDevtools } from '@tanstack/react-query-devtools';
import { useState, type ReactNode } from 'react';

export function QueryProvider({ children }: { children: ReactNode }) {
  const [queryClient] = useState(
    () =>
      new QueryClient({
        defaultOptions: {
          queries: {
            staleTime: 30 * 1000,
            retry: 1,
            refetchOnWindowFocus: false,
          },
          mutations: {
            onError: (error) => {
              console.error('Mutation error:', error);
            },
          },
        },
      }),
  );

  return (
    <QueryClientProvider client={queryClient}>
      {children}
      {process.env.NODE_ENV === 'development' && <ReactQueryDevtools initialIsOpen={false} />}
    </QueryClientProvider>
  );
}
```

**Step 33: Base UI patterns**

**File:** `src/components/shared/page-header.tsx` — consistent page header with title, description, action buttons

**File:** `src/components/shared/empty-state.tsx` — "no data" pattern with icon, title, description, optional CTA

**File:** `src/components/shared/loading-skeleton.tsx` — reusable skeleton patterns (table, card, form)

**File:** `src/components/shared/confirmation-dialog.tsx` — destructive action confirmation using shadcn `AlertDialog`

**Step 34: Placeholder pages**

Create minimal placeholder pages for every route. Each renders `<PageHeader>` + `<EmptyState>` with a "Coming in Layer X" message and correct breadcrumbs.

Pages under `src/app/(dashboard)/[portSlug]/`:

- `page.tsx` (dashboard)
- `clients/page.tsx`
- `interests/page.tsx`
- `berths/page.tsx`
- `expenses/page.tsx`
- `invoices/page.tsx`
- `documents/page.tsx`
- `email/page.tsx`
- `reminders/page.tsx`
- `reports/page.tsx`
- `settings/profile/page.tsx`
- `settings/notifications/page.tsx`
- `settings/calendar/page.tsx`
- `admin/page.tsx`
- `admin/users/page.tsx`
- `admin/roles/page.tsx`
- `admin/ports/page.tsx`
- `admin/audit/page.tsx`
- `admin/settings/page.tsx`
- `admin/webhooks/page.tsx`
- `admin/reports/page.tsx`
- `admin/templates/page.tsx`
- `admin/forms/page.tsx`
- `admin/tags/page.tsx`
- `admin/import/page.tsx`
- `admin/monitoring/page.tsx`
- `admin/backup/page.tsx`
- `admin/custom-fields/page.tsx`
- `admin/onboarding/page.tsx`

#### Afternoon: Security Baseline + Nginx + Verification (3 hours)

**Step 35: Nginx configuration**

**File:** `nginx/nginx.conf`

Full configuration matching `SECURITY-GUIDELINES.md` Section 6:

- TLS 1.3 only
- All security headers (HSTS, X-Frame-Options, X-Content-Type-Options, Referrer-Policy, CSP, Permissions-Policy)
- Rate limiting zones: `auth` (5r/m), `general` (30r/s), `api` (60r/s), `upload` (10r/m)
- Proxy to `crm-app:3000`
- WebSocket upgrade for `/socket.io/`
- CORS for `/api/public/` (allow `portnimara.com` only)
- HTTP → HTTPS redirect
- Request body size limits (1MB JSON, 50MB uploads)
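
As a sketch of how the zone list above maps onto nginx directives (the zone sizes and burst values here are assumptions, not taken from `SECURITY-GUIDELINES.md`; only the zone names and rates come from the list):

```nginx
# Illustrative fragment — zone sizes and bursts are assumed values.
limit_req_zone $binary_remote_addr zone=auth:10m    rate=5r/m;
limit_req_zone $binary_remote_addr zone=general:10m rate=30r/s;
limit_req_zone $binary_remote_addr zone=api:10m     rate=60r/s;
limit_req_zone $binary_remote_addr zone=upload:10m  rate=10r/m;

server {
    # Strict limit on login attempts; everything else proxies normally.
    location /api/auth/ {
        limit_req zone=auth burst=5 nodelay;
        proxy_pass http://crm-app:3000;
    }
}
```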
|
|||
|
|
|
|||
|
|
**Step 36: Constants file**
|
|||
|
|
|
|||
|
|
**File:** `src/lib/constants.ts`
|
|||
|
|
|
|||
|
|
```typescript
// Pipeline stages (from 09-BUSINESS-RULES.md)
export const PIPELINE_STAGES = [
  'open',
  'details_sent',
  'in_communication',
  'visited',
  'signed_eoi_nda',
  'ten_percent_deposit',
  'contract',
  'completed',
] as const;

export type PipelineStage = (typeof PIPELINE_STAGES)[number];

// Berth statuses
export const BERTH_STATUSES = [
  'available',
  'under_offer',
  'sold',
  'maintenance',
  'reserved',
] as const;
export type BerthStatus = (typeof BERTH_STATUSES)[number];

// Lead categories
export const LEAD_CATEGORIES = ['hot', 'warm', 'cold', 'dormant'] as const;
export type LeadCategory = (typeof LEAD_CATEGORIES)[number];

// Invoice statuses
export const INVOICE_STATUSES = ['draft', 'sent', 'paid', 'overdue', 'cancelled'] as const;
export type InvoiceStatus = (typeof INVOICE_STATUSES)[number];

// Expense categories
export const EXPENSE_CATEGORIES = [
  'food_beverage',
  'transportation',
  'accommodation',
  'supplies',
  'maintenance',
  'utilities',
  'professional_services',
  'entertainment',
  'marketing',
  'office',
  'other',
] as const;
export type ExpenseCategory = (typeof EXPENSE_CATEGORIES)[number];

// Document types
export const DOCUMENT_TYPES = ['eoi', 'nda', 'contract', 'addendum', 'other'] as const;
export type DocumentType = (typeof DOCUMENT_TYPES)[number];

// Reminder priorities
export const REMINDER_PRIORITIES = ['low', 'normal', 'high', 'urgent'] as const;
export type ReminderPriority = (typeof REMINDER_PRIORITIES)[number];

// Client sources
export const CLIENT_SOURCES = [
  'website',
  'manual',
  'referral',
  'broker',
  'event',
  'other',
] as const;
export type ClientSource = (typeof CLIENT_SOURCES)[number];

// Notification types
export const NOTIFICATION_TYPES = [
  'reminder_due',
  'reminder_overdue',
  'new_registration',
  'eoi_signature_event',
  'new_email',
  'duplicate_alert',
  'invoice_overdue',
  'waiting_list',
  'system_alert',
  'follow_up_created',
  'tenure_expiring',
] as const;
export type NotificationType = (typeof NOTIFICATION_TYPES)[number];

// File MIME type allowlist (SECURITY-GUIDELINES.md Section 10)
export const ALLOWED_MIME_TYPES = new Set([
  'image/jpeg',
  'image/png',
  'image/gif',
  'image/webp',
  'application/pdf',
  'application/msword',
  'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
  'application/vnd.ms-excel',
  'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
]);

export const MAX_FILE_SIZE = 50 * 1024 * 1024; // 50MB
export const MAX_JSON_BODY_SIZE = 1 * 1024 * 1024; // 1MB
export const PRESIGNED_URL_EXPIRY = 900; // 15 minutes
```
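The `as const` tuples above double as runtime validators. A minimal sketch — the `isPipelineStage` helper name is ours, not from the spec:

```typescript
// Derive a runtime type guard from the const tuple, so untrusted strings
// (query params, imported CSV cells) can be narrowed to the union type.
export const PIPELINE_STAGES = [
  'open',
  'details_sent',
  'in_communication',
  'visited',
  'signed_eoi_nda',
  'ten_percent_deposit',
  'contract',
  'completed',
] as const;

export type PipelineStage = (typeof PIPELINE_STAGES)[number];

export function isPipelineStage(value: string): value is PipelineStage {
  return (PIPELINE_STAGES as readonly string[]).includes(value);
}
```

The same pattern applies to every tuple in the file, and the tuples can also feed Zod's `z.enum(...)` directly when the validation layer is built.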
**Step 37: Custom server entry point**

**File:** `src/server.ts`

This is the custom entry point that boots Next.js + Socket.io together in development. In production, the standalone Next.js `server.js` handles HTTP, and `crm-worker` handles BullMQ separately.

```typescript
import { createServer } from 'node:http';
import next from 'next';

import { initSocketServer } from '@/lib/socket/server';
import { registerRecurringJobs } from '@/lib/queue/scheduler';
import { ensureBucket } from '@/lib/minio';
import { logger } from '@/lib/logger';

const dev = process.env.NODE_ENV !== 'production';
const hostname = 'localhost';
const port = parseInt(process.env.PORT || '3000', 10);

async function main() {
  const app = next({ dev, hostname, port });
  const handle = app.getRequestHandler();

  await app.prepare();

  const httpServer = createServer(handle);

  // Initialize Socket.io on the same HTTP server
  initSocketServer(httpServer);
  logger.info('Socket.io initialized');

  // Ensure MinIO bucket exists
  await ensureBucket();
  logger.info('MinIO bucket verified');

  // Register recurring BullMQ jobs (dev only — production uses crm-worker)
  if (dev) {
    await registerRecurringJobs();
    // Import workers to start processing
    await import('@/lib/queue/workers/email');
    await import('@/lib/queue/workers/documents');
    await import('@/lib/queue/workers/notifications');
    await import('@/lib/queue/workers/maintenance');
    // ... other workers
    logger.info('BullMQ workers started (dev mode)');
  }

  httpServer.listen(port, () => {
    logger.info({ port, dev }, `Server ready at http://${hostname}:${port}`);
  });
}

main().catch((err) => {
  logger.fatal(err, 'Failed to start server');
  process.exit(1);
});
```
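One gap worth closing during implementation: the entry point never handles `SIGTERM`, so `docker stop` can drop in-flight requests. A hedged sketch of a shutdown helper (the function and its 10s default are our assumption, not from the spec):

```typescript
import { createServer, type Server } from 'node:http';

// Stop accepting new connections, then resolve once in-flight requests
// drain — or after `graceMs` as a hard deadline (assumed 10s default).
export function shutdownGracefully(server: Server, graceMs = 10_000): Promise<void> {
  return new Promise((resolve) => {
    const deadline = setTimeout(resolve, graceMs);
    server.close(() => {
      clearTimeout(deadline);
      resolve();
    });
  });
}

// Wiring sketch for src/server.ts:
//   process.once('SIGTERM', async () => {
//     await shutdownGracefully(httpServer);
//     process.exit(0);
//   });
```

BullMQ workers have their own `worker.close()` for the same purpose; in dev mode both should run before exit.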
**Step 38: Final verification**

Run the full verification checklist:

```bash
# Type check
pnpm tsc --noEmit

# Lint
pnpm eslint src/

# Docker
docker compose -f docker-compose.yml -f docker-compose.dev.yml up --build

# Verify DB tables
pnpm drizzle-kit studio # Visual inspection of all 51 application tables

# Seed
pnpm tsx src/lib/db/seed.ts

# Start dev server
pnpm tsx src/server.ts

# Verify:
# - Login page renders at /login
# - Can log in with seeded admin
# - Dashboard layout renders with sidebar
# - All placeholder pages accessible
# - Socket.io connects (check browser console)
# - BullMQ queues initialized (check server logs)
# - Health endpoint returns 200 at /api/health
```
---

## 3. Code-Ready Details

### API Route Middleware Chain

Every API route under `/api/v1/` follows this pattern:

```
Request → Next.js middleware (session cookie check)
        → withAuth() (load user, port, permissions)
        → withPermission(resource, action) (RBAC check)
        → rate limit check (Redis sliding window)
        → Zod input validation
        → handler (business logic)
        → audit log write
        → response
```
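The chain above composes naturally into a `defineRoute` wrapper (the "execution spine" contract named in the Codex Addenda). A minimal synchronous model — the real version wraps async Next.js route handlers; these types are illustrative:

```typescript
// Illustrative, synchronous model of the middleware chain.
type RequestContext = {
  user?: { id: string; permissions: string[] };
  portId?: string;
};
type RouteResult = { status: number; body: unknown };
type Handler = (ctx: RequestContext) => RouteResult;
type Middleware = (next: Handler) => Handler;

const withAuth: Middleware = (next) => (ctx) => {
  if (!ctx.user) return { status: 401, body: { error: 'Unauthorized' } };
  return next(ctx);
};

const withPermission =
  (permission: string): Middleware =>
  (next) =>
  (ctx) => {
    if (!ctx.user?.permissions.includes(permission)) {
      return { status: 403, body: { error: 'Forbidden' } };
    }
    return next(ctx);
  };

// Compose right-to-left so the first middleware listed runs first.
function defineRoute(handler: Handler, ...middleware: Middleware[]): Handler {
  return middleware.reduceRight((wrapped, mw) => mw(wrapped), handler);
}

const listClients = defineRoute(
  (ctx) => ({ status: 200, body: { portId: ctx.portId, clients: [] } }),
  withAuth,
  withPermission('clients:read'),
);
```

Rate limiting, Zod validation, and audit logging slot in as additional `Middleware` values in the same list, which is what makes this contract a multiplier across 250+ endpoints.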
### State Management Per Page

| Page           | Server State (TanStack Query)   | Client State (Zustand)         | URL State                |
| -------------- | ------------------------------- | ------------------------------ | ------------------------ |
| Dashboard      | `['dashboard', portId]`         | `sidebarCollapsed`             | none                     |
| Client List    | `['clients', portId, filters]`  | `sidebarCollapsed`             | `?search=&source=&page=` |
| Client Detail  | `['clients', portId, clientId]` | `activeTab`                    | `?tab=overview`          |
| All list pages | `[entity, portId, filters]`     | `sidebarCollapsed`, `viewMode` | `?page=&sort=&filter=`   |
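The key shapes in the table are best centralized in a small factory so cache invalidation stays consistent; a sketch (the `queryKeys` name is ours):

```typescript
// Illustrative query-key factory matching the shapes in the table above.
type ClientFilters = { search?: string; source?: string; page?: number };

export const queryKeys = {
  dashboard: (portId: string) => ['dashboard', portId] as const,
  clients: (portId: string, filters: ClientFilters = {}) =>
    ['clients', portId, filters] as const,
  client: (portId: string, clientId: string) =>
    ['clients', portId, clientId] as const,
};

// Invalidating with the prefix ['clients', portId] then matches both list
// and detail keys under TanStack Query's prefix matching.
```

Because `portId` sits in every key, switching ports never serves stale data from another port's cache.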
### shadcn Components Used in L0

| Component         | Where Used                                            |
| ----------------- | ----------------------------------------------------- |
| `Button`          | Login form, sidebar nav, topbar actions, page headers |
| `Input`, `Label`  | Login form, password forms                            |
| `Card`            | Login page, empty states                              |
| `Avatar`, `Badge` | Sidebar user footer, topbar                           |
| `DropdownMenu`    | Topbar "+ New", user menu                             |
| `Sheet`           | Mobile sidebar drawer                                 |
| `ScrollArea`      | Sidebar navigation                                    |
| `Separator`       | Sidebar sections                                      |
| `Tooltip`         | Collapsed sidebar icons                               |
| `Breadcrumb`      | All pages (auto-generated)                            |
| `Skeleton`        | Loading states                                        |
| `AlertDialog`     | Confirmation dialogs                                  |
| `Sonner` (Toast)  | Notifications, errors                                 |
| `Form`            | Login, set-password, reset-password                   |
### CSS / Tailwind Patterns

- Sidebar: `w-64 bg-[#1e2844] text-sidebar-text` (expanded), `w-16` (collapsed)
- Content area: `bg-background` (white light mode, `#131a2c` dark mode)
- Page headers: `text-2xl font-semibold text-text-primary`
- Cards: `bg-card rounded-lg border shadow-sm`
- Status badges: `bg-success-bg text-success border-success-border` (and warning/error/info variants)
- Focus rings: `ring-2 ring-brand ring-offset-2`
- Transitions: sidebar collapse `transition-all duration-200`, page transitions via `motion-safe:animate-in`

---
## 4. Acceptance Criteria

1. `docker compose -f docker-compose.yml -f docker-compose.dev.yml up` starts postgres, redis, and crm-app successfully
2. All 51 application tables (plus Better Auth-managed tables) exist with correct schemas, constraints, indexes, and `pgcrypto` extension
3. Seed data inserted: 1 port, 5 system roles, 1 super admin user
4. Login page renders at `/login` with Port Nimara branding (dark navy, PN logo)
5. Can log in with super admin credentials → redirected to `/port-nimara` (served by the `(dashboard)/[portSlug]` layout; route groups do not appear in URLs)
6. Session persists across page refreshes (httpOnly cookie, Redis-backed)
7. Protected routes redirect to `/login` when not authenticated
8. API routes return 401 JSON when not authenticated
9. Sidebar renders all navigation sections with correct icons
10. All ~29 placeholder pages load with correct breadcrumbs and "Coming in Layer X" empty states
11. Port switcher hidden (single port mode) — visible only with 2+ ports
12. Socket.io connects on login, auto-joins `port:{portId}` and `user:{userId}` rooms
13. All 10 BullMQ queues initialized (visible in server logs)
14. All 13 recurring jobs registered (visible in server logs)
15. MinIO bucket exists and accessible (`/api/health` reports `minio: ok`)
16. Health endpoint at `/api/health` returns `{ status: "healthy", checks: { postgres: "ok", redis: "ok", minio: "ok" } }`
17. Structured logging active (pino JSON output in production, pretty-printed in dev)
18. Nginx serves over HTTPS with all security headers (HSTS, CSP, X-Frame-Options, etc.)
19. Application-level rate limiter functional (Redis sliding window)
20. ESLint passes with zero errors, TypeScript strict mode passes with zero errors
21. No secrets in source code — `.env.example` has placeholders only, pre-commit hook catches accidental commits
22. Audit log middleware functional — login event written to `audit_logs` table
23. Zod env validation fails fast on missing/invalid environment variables
24. Dark mode toggle in user menu switches theme (CSS variables swap)
25. Mobile responsive: sidebar collapses to hamburger drawer below 768px
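Criterion 19's sliding window can be modeled before Redis is wired in. An in-memory sketch — the production version stores timestamps in a Redis sorted set per user; this class is illustrative only:

```typescript
// Illustrative in-memory sliding-window limiter. Production swaps the Map
// for a Redis ZSET per user (ZREMRANGEBYSCORE to trim, ZCARD to count).
export class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(
    private limit: number,
    private windowMs: number,
  ) {}

  allow(key: string, now: number = Date.now()): boolean {
    const windowStart = now - this.windowMs;
    // Drop timestamps that have slid out of the window.
    const recent = (this.hits.get(key) ?? []).filter((t) => t > windowStart);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false;
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

Unlike a fixed-window counter, this never admits a burst of `2 × limit` requests straddling a window boundary.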
---

## 5. Self-Review Checklist

Before calling Layer 0 done:

- [ ] **Security:** All env vars externalized, no hardcoded secrets, pgcrypto enabled, TLS 1.3 configured
- [ ] **Auth:** Better Auth + Redis sessions, Argon2id hashing, 12-char minimum password, CSRF protection, rate-limited login
- [ ] **Port scoping:** Every query helper includes `portId`, middleware extracts port from session context
- [ ] **RBAC:** 5 system roles match `10-AUTH-AND-PERMISSIONS.md`, permission check helper works with JSON permission map
- [ ] **Audit:** `createAuditLog()` called on login/logout, sensitive fields masked, append-only
- [ ] **Error handling:** `AppError` hierarchy, Zod validation errors formatted as `{ error, details }`, no stack traces leak to client
- [ ] **Rate limiting:** Both nginx (per-IP) and application (per-user Redis sliding window) active
- [ ] **Type safety:** Zero `any` types, strict mode, all Drizzle schemas typed, all API responses typed
- [ ] **Docker:** Multi-stage build, non-root user, health checks on all services, no exposed internal ports
- [ ] **Logging:** Pino with redaction rules, no PII in logs, log levels configurable via env
- [ ] **Real-time:** Socket.io connects, authenticates, joins rooms, idle timeout configured
- [ ] **Background jobs:** All 10 queues created, all 13 recurring jobs registered, exponential backoff configured
- [ ] **UI:** Sidebar matches mockup-A design, Inter font loaded, design tokens applied, responsive breakpoints working
- [ ] **Developer experience:** ESLint + Prettier + lint-staged + husky, TypeScript strict, hot reload works, Drizzle Studio accessible
- [ ] **Schema completeness:** All 51 application tables from `07-DATABASE-SCHEMA.md` present with correct columns, types, constraints, indexes, and Drizzle relation definitions
---

## Estimated File Count

Layer 0 produces approximately **85–95 files**:

- Config files: ~10 (tsconfig, eslint, prettier, drizzle, next.config, docker, nginx, husky, lint-staged, .env.example)
- Schema files: 12 (10 domain + relations + index)
- Auth: 4 (server, client, permissions, api catch-all)
- Infrastructure: 8 (redis, rate-limit, logger, errors, audit, minio, env, constants)
- Socket.io: 4 (server, events, rooms, provider)
- BullMQ: 13 (index, scheduler, 10 worker stubs, types)
- Layout components: 5 (sidebar, topbar, port-switcher, breadcrumbs, dashboard layout)
- Shared components: 4 (page-header, empty-state, loading-skeleton, confirmation-dialog)
- Providers: 3 (query, socket, port)
- Stores: 1 (ui-store)
- Pages: ~32 (3 auth + 29 placeholder dashboard pages + not-found)
- Server: 1 (custom server.ts)
- Seed: 1
- API routes: 2 (auth catch-all, health)

---
## Codex Addenda — Merged from Competing Plan Review

The following items are cherry-picked from the Codex competing plan and should be incorporated during implementation. They represent architectural patterns and corrections that complement the Claude Code base plan.

### 1. Execution Spine Framing

Frame L0 not as "setup work" but as building the **execution spine** that all subsequent layers hang from. The reusable contracts produced here — `defineRoute`, `RequestContext`, `requirePermission`, `auditLog`, `buildStorageKey`, queue registry, and socket emitter — are the multiplier for 250+ endpoints. Pages stay minimal until those contracts exist.

### 2. Table Count Correction

The locked schema defines **51 application tables** plus Better Auth-managed auth tables. Plans referencing "49 tables" should be corrected. The coding agent will drift if the count is wrong.

### 3. Session Storage Resolution

Standardize on **PostgreSQL as the Better Auth session source of truth**. Redis is for cache, rate limits, queues, and Socket.io only. This resolves the contradiction across foundation docs.

### 4. Health Response Schema

Add a typed health check response schema:
```ts
export const healthResponseSchema = z.object({
  status: z.enum(['healthy', 'degraded', 'down']),
  services: z.array(
    z.object({
      name: z.string(),
      status: z.enum(['healthy', 'degraded', 'down']),
      latencyMs: z.number().nullable(),
      detail: z.string().optional(),
    }),
  ),
});
```
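The top-level `status` is derived from the worst per-service status; a sketch of that reduction (the `deriveOverallStatus` name is ours):

```typescript
// Worst-of reduction for the health endpoint's top-level status field.
type ServiceStatus = 'healthy' | 'degraded' | 'down';

const SEVERITY: Record<ServiceStatus, number> = { healthy: 0, degraded: 1, down: 2 };

export function deriveOverallStatus(services: { status: ServiceStatus }[]): ServiceStatus {
  return services.reduce<ServiceStatus>(
    (worst, svc) => (SEVERITY[svc.status] > SEVERITY[worst] ? svc.status : worst),
    'healthy',
  );
}
```

With this shape, a Redis outage reports `degraded` (queues and sockets impaired) while a PostgreSQL outage reports `down`, matching the graceful-degradation rules in item 6 below.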
### 5. Storage Key Builder Signature

Adopt Codex's explicit entity enum in `buildStorageKey`:

```ts
export function buildStorageKey(input: {
  portSlug: string;
  entity: 'clients' | 'expenses' | 'invoices' | 'documents' | 'general';
  entityId: string;
  extension: string;
}): string;
```

Storage keys use UUID filenames only: `{portSlug}/{entity}/{entityId}/{uuid}.{ext}`.
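A sketch implementation of that signature, assuming `node:crypto`'s `randomUUID` (the extension normalization is our addition):

```typescript
import { randomUUID } from 'node:crypto';

// Sketch of buildStorageKey: the UUID filename prevents path traversal and
// collisions from user-supplied names; only the extension survives.
export function buildStorageKey(input: {
  portSlug: string;
  entity: 'clients' | 'expenses' | 'invoices' | 'documents' | 'general';
  entityId: string;
  extension: string;
}): string {
  const ext = input.extension.replace(/^\./, '').toLowerCase();
  return `${input.portSlug}/${input.entity}/${input.entityId}/${randomUUID()}.${ext}`;
}
```

The original filename, if needed for display, belongs in the database row, never in the object key.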
### 6. Graceful Degradation Edge Cases

- **Redis unavailable on boot:** App still starts, but queues, rate limiting, and sockets are marked degraded in health check.
- **MinIO unavailable on boot:** File-dependent routes are disabled via startup status, not silent failures.
- **Duplicate socket connections:** Beyond 10 per user are rejected.
- **Presigned URLs:** Expire after 15 minutes and are never cached client-side in persisted state.
- **User has no assigned ports:** Redirect to a blocked access page, not the dashboard.
- **One active port only:** Port switcher component never mounts.
### 7. Security: Queue Job Payloads

Never put secrets or raw credentials into BullMQ job payloads — enqueue record IDs only.
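A sketch of the ID-only convention, plus a defensive guard before enqueueing (the job type names and `assertSafePayload` helper are illustrative, not from the spec):

```typescript
// ID-only payload shapes: workers re-fetch records and read credentials
// from env/secrets at processing time, never from the payload itself.
export type SendEmailJob = { emailId: string };
export type GenerateDocumentJob = { documentId: string };

const FORBIDDEN_KEYS = ['password', 'secret', 'token', 'apikey', 'credentials'];

// Refuse payloads whose key names suggest embedded secrets.
export function assertSafePayload(payload: Record<string, unknown>): void {
  for (const key of Object.keys(payload)) {
    if (FORBIDDEN_KEYS.some((bad) => key.toLowerCase().includes(bad))) {
      throw new Error(`Refusing to enqueue payload containing "${key}"`);
    }
  }
}
```

This keeps secrets out of Redis persistence, BullMQ retries, and any queue-monitoring UI.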
### 8. Docker Acceptance Criteria

`docker compose up` must start `crm-app`, `postgres`, `redis`, and `nginx` without manual patching. This should be verified as part of L0 acceptance.
|