fix(audit): non-Documenso backlog sweep — port-binding, NULLS NOT DISTINCT, custom merge tokens, company docs

Sweep through the remaining audit-final-deferred items that aren't
blocked on the back-burnered Documenso work.

Multi-tenant isolation:
- Storage proxy ProxyTokenPayload gains an optional `p` (port slug)
  claim; the verifier asserts the storage key starts with `${p}/`.
  Defense-in-depth against a buggy issuer in some future code path that
  mixes up port scopes — every storage key generated by
  generateStorageKey() already prefixes the slug. document-sends opts in
  for its 24h emailed download links; other callers continue working
  unchanged via the optional field (verifier check sketched below).
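
A minimal sketch of that verifier-side check (hypothetical shape — the
real payload type and key-claim name live in
src/lib/storage/filesystem.ts and may differ):

```ts
// Sketch only: `k` as the storage-key claim is an assumption; `p` is the
// optional port-slug claim added by this commit. Signature and expiry
// verification are elided.
interface ProxyTokenPayloadSketch {
  k: string; // storage key the token grants access to (assumed name)
  p?: string; // port slug the issuer was scoped to when minting
}

function assertPortScope(payload: ProxyTokenPayloadSketch): void {
  // Every key from generateStorageKey() is `${portSlug}/...`, so a
  // prefix check is enough to bind the token to the issuing port.
  if (payload.p !== undefined && !payload.k.startsWith(`${payload.p}/`)) {
    throw new Error('storage key is outside the token port scope');
  }
}
```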

DB schema reconciliation:
- Migration 0047 rebuilds the system_settings unique index with NULLS
  NOT DISTINCT (Postgres 15+) so global settings (port_id IS NULL) are
  uniquely keyed by `key` alone. Surfaced and deduped 65 duplicate
  (storage_backend, NULL) rows that had accumulated from race-prone
  delete-then-insert patterns in the ocr-config / settings /
  residential-stages / ai-budget services. All four services converted
  to true onConflictDoUpdate upserts, closing the race window.

API uniformity:
- Response shape standardization: 16 routes converted from
  `{ success: true }` to 204 No Content. CLAUDE.md documents the
  convention (`{ data: <T> }` for content, 204 for empty mutations,
  portal-auth retains `{ success: true }` for the frontend's auth chain).
- req.json() → parseBody() migration across 9 admin/CRM routes
  (custom-fields, expenses/export ×3, currency convert,
  search/recently-viewed, admin/duplicates, berths/pdf-{upload-url,
  versions, parse-results}). Uniform 400 error shapes for
  ZodError-flagged bodies; the helper's behavior is sketched below.
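
What the parseBody-style helper does differently from raw `req.json()`
+ `schema.parse()`, as a sketch (assumed implementation — the real
helper lives in `src/lib/api/route-helpers` and may differ):

```ts
import type { NextRequest } from 'next/server';
import { z } from 'zod';

// Local stand-in for the repo's error type: whatever is thrown here must
// be something errorResponse() maps to a 400, not a generic 500.
class BodyValidationError extends Error {
  constructor(public readonly issues: z.ZodIssue[]) {
    super('Invalid request body');
  }
}

async function parseBodySketch<T extends z.ZodTypeAny>(
  req: NextRequest,
  schema: T,
): Promise<z.infer<T>> {
  const raw = await req.json().catch(() => ({})); // tolerate empty bodies
  const result = schema.safeParse(raw);
  if (!result.success) {
    // Field-level issues ride along so the route's catch block can emit
    // the uniform 400 shape the frontend's toastError hook recognizes; a
    // bare schema.parse() throws a ZodError that surfaces as a 500.
    throw new BodyValidationError(result.error.issues);
  }
  return result.data;
}
```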

Custom-fields merge tokens (shipped end-to-end):
- merge-fields.ts gains CUSTOM_MERGE_TOKEN_RE + helpers for the
  `{{custom.<fieldName>}}` shape (token regex sketched after this list).
- document-templates validator accepts the dynamic shape alongside
  the static catalog tokens.
- document-sends.service mergeCustomFieldValues resolver fetches
  per-port custom_field_definitions for client/interest/berth contexts
  and substitutes stored values keyed by `{{custom.fieldName}}`.
- custom-fields-manager amber banner updated to reflect that merge
  tokens now expand (search index + entity-diff remain documented
  design limitations).
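
A hedged sketch of the token shape those helpers target (the actual
CUSTOM_MERGE_TOKEN_RE may differ, e.g. in which characters a field name
may contain):

```ts
// Matches `{{custom.<fieldName>}}`; capture group 1 is the field name.
// The character class is an assumption, not the repo's actual regex.
const CUSTOM_MERGE_TOKEN_RE = /\{\{custom\.([A-Za-z0-9_]+)\}\}/g;

function findCustomTokens(template: string): string[] {
  return [...template.matchAll(CUSTOM_MERGE_TOKEN_RE)].map((m) => m[1]);
}

// findCustomTokens('Notes: {{custom.berthVisitNotes}}')
//   → ['berthVisitNotes']
```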

/api/v1/files cross-entity filtering:
- Validator + listFiles + uploadFile accept companyId AND yachtId
  alongside clientId. file-upload-zone propagates both.
- New CompanyFilesTab component mirrors ClientFilesTab; restored as a
  visible Documents tab in company-tabs.tsx (was a hidden stub).

Inline TODOs:
- Reviewed the remaining two TODOs (per-user reminder schedule, import
  worker handlers). Both are placeholders for future feature surfaces,
  not bugs — the per-port digest works for every customer, and nothing
  currently enqueues import jobs (verified). Annotated in BACKLOG.

BACKLOG.md updated to reflect what landed and what's still pending
(Documenso-related items still bundled with the back-burnered phases).

Tests: 1185/1185 vitest, tsc clean.
2026-05-08 02:20:27 +02:00
parent 60365dc3de
commit 8dc16dcd2e
49 changed files with 578 additions and 254 deletions

View File

@@ -106,6 +106,8 @@ src/
- **Send-from accounts (sales send-outs):** Configurable via `system_settings`; defaults to `sales@portnimara.com` for human-touch and `noreply@portnimara.com` for automation. SMTP/IMAP passwords are AES-256-GCM encrypted at rest; the API never returns decrypted secrets — only `*PassIsSet` boolean markers. Send-out audit goes to `document_sends` (separate from `audit_logs` because of volume + binary refs). Body markdown is XSS-safe via `renderEmailBody()` (escape-then-allowlist, sketched after this list; tested against the standard XSS vector list). Rate limit: 50 individual sends/user/hour. Pre-send size threshold: files > `email_attach_threshold_mb` ship as a 24h signed-URL link rather than an attachment (avoids the duplicate-send race from async bounces). The download-link fallback HTML-escapes the filename to prevent injection from admin-supplied brochure names. Bounce monitoring requires IMAP credentials in addition to SMTP — without them, the size-rejection banner stays disabled.
- **NocoDB berth import:** `pnpm tsx scripts/import-berths-from-nocodb.ts --apply --port-slug port-nimara` re-imports from the legacy NocoDB Berths table. Idempotent: rows where `updated_at > last_imported_at` (the "human edited this since last import" guard) are skipped unless `--force`. Adds `--update-snapshot` to also rewrite `src/lib/db/seed-data/berths.json`. Uses `pg_advisory_xact_lock` so two simultaneous runs serialize. Pure helpers in `src/lib/services/berth-import.ts` are unit-tested.
- **Routes:** Multi-tenant via `[portSlug]` dynamic segment. Typed routes enabled.
- **API response shapes:** Conventional envelope is `{ data: <T> }` for any endpoint that returns content (read OR write). Mutations that return nothing emit `204 No Content` (`new NextResponse(null, { status: 204 })`). Don't use `{ success: true }` for CRM mutations — it was a legacy pattern, normalized away on 2026-05-07. Public portal-auth endpoints are an exception: they return `{ success: true }` because the frontend needs a non-error JSON body to chain on. List/paginated reads return `{ data: <T[]>, total?, hasMore? }` (see `/api/v1/clients` for the shape). Errors always go through `errorResponse(error)` from `@/lib/errors` so request-id propagation and the audit-tier mapping stay uniform.
- **Body parsing:** Always use `parseBody(req, schema)` from `@/lib/api/route-helpers` instead of `await req.json(); schema.parse(body)`. The helper returns a uniform 400 with field-level errors that the frontend's `toastError` hook recognizes; raw `req.json` + `schema.parse` produces a generic 500 because the ZodError isn't caught in the same shape.
- **Pre-commit:** Husky + lint-staged runs ESLint fix + Prettier on staged `.ts`/`.tsx` files. The hook also blocks `.env*` files (including `.env.example`) from being committed; pass them via a separate workflow if needed.
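
A minimal sketch of the escape-then-allowlist approach `renderEmailBody()` takes (illustrative only — the real renderer and its allowlist live in the repo and will differ):

```ts
// Hypothetical sketch: escape ALL raw HTML first, then re-introduce
// markup only for explicitly supported markdown constructs. An injected
// `"><script>` arrives pre-escaped and renders as inert text.
const escapeHtml = (s: string): string =>
  s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');

export function renderEmailBodySketch(markdown: string): string {
  let html = escapeHtml(markdown);
  // Allowlist step: only these replacements can produce tags.
  html = html
    .replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>')
    .replace(/\n{2,}/g, '</p><p>')
    .replace(/\n/g, '<br/>');
  return `<p>${html}</p>`;
}
```
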
## Schema migrations during dev

View File

@@ -4,13 +4,12 @@
asking "what's left to build/fix?". Items are grouped by source doc;
each entry links back to the original spec for full context.
Last updated: 2026-05-07 (after the audit-final-deferred sweep — partial
archived indexes, document_sends interestId port-verify, custom-fields
per-entity permission gate, recommender bool parsing, expense PDF cursor
math, berth PDF silent-drop logging, YachtForm preset-owner + interest
form member-company yacht filter + add-new shortcut, invoice detail
typed). Many older items in §C and §F were already resolved by earlier
fix-audit commit waves; the audit doc was stale.
Last updated: 2026-05-08 (second non-Documenso sweep — storage-proxy
port-binding, system_settings NULLS NOT DISTINCT + dedup migration,
response-shape standardization, parseBody migration, custom-field merge
tokens, /api/v1/files companyId+yachtId filter, Company Documents tab,
file-upload zone wired for company/yacht targeting). Documenso phases
2-7 stay back-burnered per user.
---
@@ -35,15 +34,14 @@ Remaining phases — explicitly back-burnered by the user on 2026-05-07:
---
## B. Custom-fields hardening (~ongoing, deferred)
## B. Custom-fields hardening
**Source:** [`docs/admin-ux-backlog.md`](./admin-ux-backlog.md) §7.
Custom Settings page already shows the amber warning banner. Remediation work:
- **Search index** — extend the GIN tsvector to include `customFieldValues` content
- **Audit diff** — extend `diffEntity` to walk the `customFieldValues` blob
- **Merge tokens** — add `{{custom.<fieldName>}}` handling at template-render time, plus surface them in the merge-tokens UI
- ✅ **Merge tokens** — `{{custom.<fieldName>}}` validators + resolver shipped 2026-05-08. Tokens expand at template-render time for client/interest/berth contexts via `mergeCustomFieldValues` in `document-sends.service.ts`. Banner updated.
- **Search index** — DEFERRED as design limitation. Adding GIN coverage requires either joining `custom_field_values` per search (slow at scale) or materializing values into a search_text column on the parent (additive maintenance burden). The amber banner documents this.
- **Audit diff** — N/A. Custom-field values live in their own table, not as a JSONB blob on the parent entity. The `setValues()` service-layer call already creates its own audit log entry (custom-fields.service.ts:349-358), so changes ARE audited — just separately from the entity-diff.
- **UI surfacing of `{{custom.…}}` tokens in template-edit pickers** — Open. The token list dialog currently only shows static catalog tokens. Surface per-port custom-field definitions as a dynamic group under "Custom" so reps can browse them. Backend already accepts the tokens; this is a UI follow-up.
---
@@ -55,15 +53,26 @@ The 2026-05-07 backlog sweep landed every small/concrete item. Remaining
entries are deferred because they need design decisions, live external
instances, or cross-cutting refactors:
### Deferred — needs design or larger refactor
### Deferred — Documenso-related (back-burnered until phases 2-7 land)
- **Storage proxy token does not bind to port_id** — `src/lib/storage/filesystem.ts:73-84`. Adding a `p` (portId) claim is mechanical; the meaningful security gain requires the proxy verifier to look up the file's owning row + assert `owner.portId === payload.p`. That requires either a routing prefix in the key (currently `${portSlug}/...` already, so a prefix check is plausible) or a per-table lookup across all owners. Decide which approach before implementing — current state ships with `validateStorageKey` + per-issuer port scoping, so this is defense-in-depth rather than an open hole.
- **Documenso webhook does not enforce port_id on document lookups** — `src/app/api/webhooks/documenso/route.ts:96-148`. Adding port scope requires either including the originating Documenso instance/team id in the lookup (Documenso doesn't surface that on the webhook payload today) OR proving `documents(documenso_id)` is globally unique with a DB constraint and a backfill check. Pick the strategy with the audit doc open.
- **Webhook dedup vs per-recipient signed events** — `src/app/api/webhooks/documenso/route.ts:103-110`. Replacing the body-hash dedup with a `(documensoDocumentId, recipientEmail, eventType)` composite unique requires schema column for recipient_email on `documentEvents`. Right place to do this is alongside Documenso Phase 2 (webhook handler enhancement) since they touch the same code.
- **v2 voidDocument endpoint shape verification** — `src/lib/services/documenso-client.ts:450-466`. Needs a live Documenso 2.x instance to confirm `POST /api/v2/envelope/delete` body shape. Bundle with Documenso Phase 5.
- **Public POST routes bypass service layer** — `src/app/api/public/{interests,website-inquiries,residential-inquiries}/route.ts`. Multi-route refactor extracting a shared `publicInterestService.create(...)`. Worth doing but big enough to deserve its own session.
- **Inconsistent response shapes** — most endpoints return `{ data: ... }`, but `notifications/[notificationId]` returns `{ success: true }`, `website-inquiries` returns `{ id, deduped }`. Codebase-wide migration; document a convention in CLAUDE.md first.
- **`systemSettings` PK / unique-index drift** — `src/lib/db/schema/system.ts:119-133`. Schema declares `uniqueIndex` on `(key, port_id)`, migration uses `key` as PK. `port_id` is nullable so `(key, port_id)` cannot serve as a PK with default NULLs-not-equal semantics. Reconcile by either making `portId` non-null with a sentinel (`__global__`) and declaring a composite PK, OR by dropping the schema-level unique index and using partial unique indexes for global vs per-port. Either path is a data migration.
- **Documenso webhook does not enforce port_id on document lookups** — `src/app/api/webhooks/documenso/route.ts:96-148`. Bundle with Documenso Phase 2 (webhook handler enhancement) since they touch the same code.
- **Webhook dedup vs per-recipient signed events** — `src/app/api/webhooks/documenso/route.ts:103-110`. Replacing the body-hash dedup with a `(documensoDocumentId, recipientEmail, eventType)` composite unique requires a recipient_email column on `documentEvents`. Bundle with Phase 2.
- **v2 voidDocument endpoint shape verification** — `src/lib/services/documenso-client.ts:450-466`. Needs a live Documenso 2.x instance. Bundle with Phase 5.
### Deferred — pure refactor (no active bug)
- **Public POST routes bypass service layer** — `src/app/api/public/{interests,website-inquiries,residential-inquiries}/route.ts`. The audit's `userId: null as unknown as string` cast was already cleaned up to a proper `userId: null`. Remaining concern is testability: extract a shared `publicInterestService.create(...)`. Pure ergonomics — no active bug or security issue.
### Done in 2026-05-08 sweep (latest)
- ✅ Storage proxy port_id binding: `ProxyTokenPayload` gains optional `p` (port slug) claim; verifier asserts `key.startsWith(${p}/)`. document-sends 24h URLs opt in; other issuers continue working unchanged.
- ✅ system_settings index rebuilt with `NULLS NOT DISTINCT` (migration 0047) — global settings are now uniquely keyed by `key` alone. Surfaced + cleaned 65 duplicate `(storage_backend, NULL)` rows that had accumulated from race-prone delete-then-insert patterns.
- ✅ All 4 read-then-write systemSettings sites converted to true `onConflictDoUpdate` upserts (ocr-config, settings, residential-stages, ai-budget).
- ✅ Response shape standardization: 16 routes converted from `{ success: true }` → `204 No Content`. CLAUDE.md documents the convention.
- ✅ `req.json()` → `parseBody()` migration across 9 admin/CRM routes (custom-fields, expenses/export ×3, currency convert, search/recently-viewed, admin/duplicates, berths/pdf-{upload-url,versions,parse-results}). Portal-auth routes intentionally retained `{ success: true }`.
- ✅ Custom-field merge tokens: validator accepts `{{custom.<fieldName>}}` shape; resolver in `mergeCustomFieldValues` substitutes from per-port custom_field_definitions + per-entity values for client/interest/berth contexts. Banner updated.
- ✅ `/api/v1/files` accepts `companyId` and `yachtId` filters. uploadFile service writes both. file-upload-zone component accepts both props.
- ✅ Company Documents tab (CompanyFilesTab) re-enabled and added to company detail tabs.
### Done in 2026-05-07 sweep (commits in this session)
@@ -91,28 +100,29 @@ instances, or cross-cutting refactors:
- ✅ All FK indexes called out in audit doc (already in place — audit was stale)
- ✅ `documentSends.sentByUserId` FK (already had `.references(...)`)
### Still open — small enough to bundle next time
### Documented limitations (no action planned)
- **`berths.current_pdf_version_id` lacks Drizzle FK** — `src/lib/db/schema/berths.ts:83`. The in-line comment fully documents why (circular FK between `berths` ↔ `berth_pdf_versions` makes column-level `.references()` infeasible). FK is enforced via migration 0030. Treat as documented limitation; revisit if Drizzle adds deferred-FK support.
- **`req.json()` without `parseBody` helper** — admin custom-fields routes use `await req.json(); schema.parse(body)` directly. Migrate for uniform 400 error shapes when the surface area calms down.
- **`berths.current_pdf_version_id` lacks Drizzle FK** — `src/lib/db/schema/berths.ts:83`. The in-line comment fully documents why (circular FK between `berths` ↔ `berth_pdf_versions` makes column-level `.references()` infeasible). FK is enforced via migration 0030. Revisit if Drizzle adds deferred-FK support.
- **`systemSettings` schema declares `uniqueIndex` instead of `NULLS NOT DISTINCT`** — Drizzle's `uniqueIndex` builder doesn't surface the flag. Migration 0047 is the source of truth; `db:push` against an empty DB would skip the flag. Same documented-limitation pattern as `berths.current_pdf_version_id`.
- **One remaining `req.json()` in admin/custom-fields/[fieldId]** — intentional. The handler inspects raw body to detect `fieldType` mutation attempts; parseBody would lose the raw view. Documented inline.
---
## D. Inline TODOs in code (2 remaining)
| File:line | Note | Status |
| ------------------------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------ |
| ~~`client-yachts-tab.tsx:93`~~ | YachtForm preset owner prop | ✅ landed 2026-05-07 (`initialOwner` prop) |
| ~~`interest-form.tsx:329`~~ | Include company-owned yachts where client is a member | ✅ landed 2026-05-07 (`yachtOwnerFilter` array filter) |
| ~~`interest-form.tsx:330`~~ | "Add new yacht" inline shortcut | ✅ landed 2026-05-07 (Plus button + YachtForm sheet) |
| [`src/lib/queue/scheduler.ts:44`](../src/lib/queue/scheduler.ts#L44) | Per-user reminder schedule configurable from `user_settings` | Open — needs `user_settings` UI surface |
| [`src/lib/queue/workers/import.ts:13`](../src/lib/queue/workers/import.ts#L13) | Import job handlers — worker is a stub                        | Open — entire feature surface                           |
| File:line | Note | Status |
| ------------------------------------------------------------------------------ | --------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
| ~~`client-yachts-tab.tsx:93`~~ | YachtForm preset owner prop | ✅ landed 2026-05-07 (`initialOwner` prop) |
| ~~`interest-form.tsx:329`~~ | Include company-owned yachts where client is a member | ✅ landed 2026-05-07 (`yachtOwnerFilter` array filter) |
| ~~`interest-form.tsx:330`~~ | "Add new yacht" inline shortcut | ✅ landed 2026-05-07 (Plus button + YachtForm sheet) |
| [`src/lib/queue/scheduler.ts:44`](../src/lib/queue/scheduler.ts#L44) | Per-user reminder schedule (override on top of per-port digest) | Placeholder — per-port digest works; revisit when a customer asks for per-user override |
| [`src/lib/queue/workers/import.ts:13`](../src/lib/queue/workers/import.ts#L13) | CSV/Excel import worker — entire feature surface | Placeholder — nothing currently enqueues `import` jobs (verified) |
---
## E. Hidden / stubbed UI tabs
- **Company Documents tab** — `src/components/companies/company-tabs.tsx:229`. Hidden until `/api/v1/files` accepts a `companyId` filter (schema supports it, validator doesn't).
- **Company Documents tab** — landed 2026-05-08. `/api/v1/files` accepts `companyId`+`yachtId` filters; CompanyFilesTab + uploadZone wired through the storage abstraction.
- **Berth Waiting List + Maintenance Log tabs** — `src/components/berths/berth-tabs.tsx:346`. Removed entirely; revisit if/when product asks.
- **Interest Contract / Reservation tabs** — `src/components/interests/interest-{contract,reservation}-tab.tsx`. Render a "coming soon" friendly card; the real flow is gated on Documenso Phases 2–6.

View File

@@ -36,7 +36,7 @@ export const DELETE = withAuth(
try {
const id = params.id!;
await archiveBrochure(ctx.portId, id);
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -3,67 +3,58 @@ import { NextRequest, NextResponse } from 'next/server';
import { withAuth, withPermission } from '@/lib/api/helpers';
import { errorResponse, NotFoundError } from '@/lib/errors';
import { updateFieldSchema } from '@/lib/validators/custom-fields';
import {
updateDefinition,
deleteDefinition,
} from '@/lib/services/custom-fields.service';
import { updateDefinition, deleteDefinition } from '@/lib/services/custom-fields.service';
export const PATCH = withAuth(
withPermission(
'admin',
'manage_custom_fields',
async (req: NextRequest, ctx, params) => {
try {
const { fieldId } = params;
if (!fieldId) throw new NotFoundError('Custom field');
withPermission('admin', 'manage_custom_fields', async (req: NextRequest, ctx, params) => {
try {
const { fieldId } = params;
if (!fieldId) throw new NotFoundError('Custom field');
const body = await req.json();
// Read raw body before parsing so we can inspect `fieldType`
// (the schema strips it; the service rejects any change). Using
// req.json() directly here is intentional — parseBody would lose
// the raw view we need for the mutation-attempt detection below.
const body = (await req.json()) as Record<string, unknown>;
const data = updateFieldSchema.parse(body);
// Parse only allowed fields; if fieldType sneaks in, the service will catch it
const data = updateFieldSchema.parse(body);
// Pass raw body too so service can detect fieldType mutation attempts
const updated = await updateDefinition(
ctx.portId,
fieldId,
ctx.userId,
{ ...data, ...(body.fieldType !== undefined && { fieldType: body.fieldType }) },
{
userId: ctx.userId,
portId: ctx.portId,
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
},
);
return NextResponse.json({ data: updated });
} catch (error) {
return errorResponse(error);
}
},
),
);
export const DELETE = withAuth(
withPermission(
'admin',
'manage_custom_fields',
async (_req: NextRequest, ctx, params) => {
try {
const { fieldId } = params;
if (!fieldId) throw new NotFoundError('Custom field');
const result = await deleteDefinition(ctx.portId, fieldId, ctx.userId, {
// Pass raw body too so service can detect fieldType mutation attempts
const updated = await updateDefinition(
ctx.portId,
fieldId,
ctx.userId,
{ ...data, ...(body.fieldType !== undefined && { fieldType: body.fieldType }) },
{
userId: ctx.userId,
portId: ctx.portId,
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
});
},
);
return NextResponse.json({ data: result });
} catch (error) {
return errorResponse(error);
}
},
),
return NextResponse.json({ data: updated });
} catch (error) {
return errorResponse(error);
}
}),
);
export const DELETE = withAuth(
withPermission('admin', 'manage_custom_fields', async (_req: NextRequest, ctx, params) => {
try {
const { fieldId } = params;
if (!fieldId) throw new NotFoundError('Custom field');
const result = await deleteDefinition(ctx.portId, fieldId, ctx.userId, {
userId: ctx.userId,
portId: ctx.portId,
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
});
return NextResponse.json({ data: result });
} catch (error) {
return errorResponse(error);
}
}),
);

View File

@@ -1,12 +1,10 @@
import { NextRequest, NextResponse } from 'next/server';
import { withAuth, withPermission } from '@/lib/api/helpers';
import { parseBody } from '@/lib/api/route-helpers';
import { errorResponse } from '@/lib/errors';
import { createFieldSchema } from '@/lib/validators/custom-fields';
import {
listDefinitions,
createDefinition,
} from '@/lib/services/custom-fields.service';
import { listDefinitions, createDefinition } from '@/lib/services/custom-fields.service';
export const GET = withAuth(
withPermission('admin', 'manage_custom_fields', async (req: NextRequest, ctx) => {
@@ -25,8 +23,7 @@ export const GET = withAuth(
export const POST = withAuth(
withPermission('admin', 'manage_custom_fields', async (req: NextRequest, ctx) => {
try {
const body = await req.json();
const data = createFieldSchema.parse(body);
const data = await parseBody(req, createFieldSchema);
const definition = await createDefinition(ctx.portId, ctx.userId, data, {
userId: ctx.userId,

View File

@@ -1,7 +1,9 @@
import { NextResponse } from 'next/server';
import { NextRequest, NextResponse } from 'next/server';
import { z } from 'zod';
import { and, eq, inArray } from 'drizzle-orm';
import type { AuthContext } from '@/lib/api/helpers';
import { parseBody } from '@/lib/api/route-helpers';
import { db } from '@/lib/db';
import { clients, clientMergeCandidates } from '@/lib/db/schema/clients';
import { errorResponse, NotFoundError, ValidationError } from '@/lib/errors';
@@ -11,6 +13,11 @@ import {
type MergeFieldChoices,
} from '@/lib/services/client-merge.service';
const confirmMergeSchema = z.object({
winnerId: z.string().min(1),
fieldChoices: z.record(z.string(), z.string()).optional(),
});
/**
* GET /api/v1/admin/duplicates
*
@@ -70,19 +77,13 @@ export async function listHandler(_req: Request, ctx: AuthContext): Promise<Next
* service which is the only path that touches client_merge_log.
*/
export async function confirmMergeHandler(
req: Request,
req: NextRequest,
ctx: AuthContext,
params: { id?: string },
): Promise<NextResponse> {
try {
const id = params.id ?? '';
const body = (await req.json().catch(() => ({}))) as {
winnerId?: string;
fieldChoices?: MergeFieldChoices;
};
if (!body.winnerId) {
throw new ValidationError('winnerId is required');
}
const body = await parseBody(req, confirmMergeSchema);
const [candidate] = await db
.select()
@@ -111,7 +112,7 @@ export async function confirmMergeHandler(
loserId,
mergedBy: ctx.userId,
callerPortId: ctx.portId,
fieldChoices: body.fieldChoices,
fieldChoices: body.fieldChoices as MergeFieldChoices | undefined,
});
return NextResponse.json({ data: result });

View File

@@ -18,7 +18,7 @@ export const DELETE = withAuth(
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
});
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -43,7 +43,7 @@ export const DELETE = withAuth(async (_req, ctx, params) => {
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
});
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -44,7 +44,7 @@ export const DELETE = withAuth(
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
});
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -43,7 +43,7 @@ export const DELETE = withAuth(
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
});
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -4,11 +4,7 @@ import { withAuth, withPermission } from '@/lib/api/helpers';
import { parseBody } from '@/lib/api/route-helpers';
import { errorResponse } from '@/lib/errors';
import { updateWebhookSchema } from '@/lib/validators/webhooks';
import {
getWebhook,
updateWebhook,
deleteWebhook,
} from '@/lib/services/webhooks.service';
import { getWebhook, updateWebhook, deleteWebhook } from '@/lib/services/webhooks.service';
// ─── GET /api/v1/admin/webhooks/[webhookId] ───────────────────────────────────
@@ -56,7 +52,7 @@ export const DELETE = withAuth(
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
});
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -8,8 +8,10 @@
*/
import { NextResponse } from 'next/server';
import { z } from 'zod';
import { type RouteHandler } from '@/lib/api/helpers';
import { parseBody } from '@/lib/api/route-helpers';
import { db } from '@/lib/db';
import { berths } from '@/lib/db/schema/berths';
import { and, eq } from 'drizzle-orm';
@@ -17,17 +19,17 @@ import { errorResponse, NotFoundError, ValidationError } from '@/lib/errors';
import { getMaxUploadMb } from '@/lib/services/berth-pdf.service';
import { getStorageBackend } from '@/lib/storage';
interface PostBody {
fileName: string;
const postBodySchema = z.object({
fileName: z.string().min(1).max(255),
/** Size hint in bytes — used to early-reject oversized uploads before we
* burn a presigned URL. */
sizeBytes?: number;
}
sizeBytes: z.number().int().nonnegative().optional(),
});
export const postHandler: RouteHandler = async (req, ctx, params) => {
try {
const body = (await req.json()) as Partial<PostBody>;
const fileName = (body.fileName ?? '').trim();
const body = await parseBody(req, postBodySchema);
const fileName = body.fileName.trim();
if (!fileName) throw new ValidationError('fileName is required');
// Tenant-scoped berth lookup. Without `eq(berths.portId, ctx.portId)` a

View File

@@ -7,23 +7,27 @@
*/
import { NextResponse } from 'next/server';
import { z } from 'zod';
import { type RouteHandler } from '@/lib/api/helpers';
import { parseBody } from '@/lib/api/route-helpers';
import { errorResponse, ValidationError } from '@/lib/errors';
import { listBerthPdfVersions, uploadBerthPdf } from '@/lib/services/berth-pdf.service';
interface PostBody {
storageKey: string;
fileName: string;
fileSizeBytes: number;
sha256: string;
parseResults?: {
engine: 'acroform' | 'ocr' | 'ai';
extracted?: Record<string, unknown>;
meanConfidence?: number;
warnings?: string[];
};
}
const postBodySchema = z.object({
storageKey: z.string().min(1),
fileName: z.string().min(1).max(255),
fileSizeBytes: z.number().int().positive(),
sha256: z.string().min(1),
parseResults: z
.object({
engine: z.enum(['acroform', 'ocr', 'ai']),
extracted: z.record(z.string(), z.unknown()).optional(),
meanConfidence: z.number().optional(),
warnings: z.array(z.string()).optional(),
})
.optional(),
});
export const getHandler: RouteHandler = async (_req, ctx, params) => {
try {
@@ -47,16 +51,7 @@ const STORAGE_KEY_RE =
export const postHandler: RouteHandler = async (req, ctx, params) => {
try {
const body = (await req.json()) as Partial<PostBody>;
if (!body.storageKey || !body.fileName) {
throw new ValidationError('storageKey and fileName are required');
}
if (typeof body.fileSizeBytes !== 'number' || body.fileSizeBytes <= 0) {
throw new ValidationError('fileSizeBytes must be a positive integer');
}
if (!body.sha256 || typeof body.sha256 !== 'string') {
throw new ValidationError('sha256 is required');
}
const body = await parseBody(req, postBodySchema);
const expectedPrefix = `berths/${params.id!}/uploads/`;
if (!body.storageKey.startsWith(expectedPrefix) || !STORAGE_KEY_RE.test(body.storageKey)) {
throw new ValidationError(

View File

@@ -1,25 +1,23 @@
import { NextResponse } from 'next/server';
import { z } from 'zod';
import { type RouteHandler } from '@/lib/api/helpers';
import { errorResponse, ValidationError } from '@/lib/errors';
import { parseBody } from '@/lib/api/route-helpers';
import { errorResponse } from '@/lib/errors';
import { applyParseResults, type ExtractedBerthFields } from '@/lib/services/berth-pdf.service';
interface PostBody {
versionId: string;
fieldsToApply: Partial<ExtractedBerthFields>;
}
const postBodySchema = z.object({
versionId: z.string().min(1),
fieldsToApply: z.record(z.string(), z.unknown()),
});
export const postHandler: RouteHandler = async (req, ctx, params) => {
try {
const body = (await req.json()) as Partial<PostBody>;
if (!body.versionId) throw new ValidationError('versionId is required');
if (!body.fieldsToApply || typeof body.fieldsToApply !== 'object') {
throw new ValidationError('fieldsToApply must be an object');
}
const body = await parseBody(req, postBodySchema);
const result = await applyParseResults(
params.id!,
body.versionId,
body.fieldsToApply,
body.fieldsToApply as Partial<ExtractedBerthFields>,
ctx.portId,
);
return NextResponse.json({ data: result });

View File

@@ -46,7 +46,7 @@ export const DELETE = withAuth(
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
});
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -38,7 +38,7 @@ export const POST = withAuth(
});
if (!existing) throw new NotFoundError('portal user');
await resendActivation(existing.id, ctx.portId);
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
}
const body = await parseBody(req, inviteSchema);

View File

@@ -20,7 +20,7 @@ export const PUT = withAuth(
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
});
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -20,7 +20,7 @@ export const PUT = withAuth(
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
});
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -2,6 +2,7 @@ import { NextResponse } from 'next/server';
import { z } from 'zod';
import { withAuth } from '@/lib/api/helpers';
import { parseBody } from '@/lib/api/route-helpers';
import { errorResponse } from '@/lib/errors';
import { convert } from '@/lib/services/currency';
@@ -13,8 +14,7 @@ const convertSchema = z.object({
export const POST = withAuth(async (req, _ctx) => {
try {
const body = await req.json();
const { amount, from, to } = convertSchema.parse(body);
const { amount, from, to } = await parseBody(req, convertSchema);
const result = await convert(amount, from, to);

View File

@@ -2,7 +2,7 @@ import { NextRequest, NextResponse } from 'next/server';
import { z } from 'zod';
import { withAuth } from '@/lib/api/helpers';
import { parseQuery } from '@/lib/api/route-helpers';
import { parseBody, parseQuery } from '@/lib/api/route-helpers';
import { errorResponse, NotFoundError, ValidationError } from '@/lib/errors';
import { requirePermission } from '@/lib/auth/permissions';
import { setValuesSchema } from '@/lib/validators/custom-fields';
@@ -91,8 +91,7 @@ export const PUT = withAuth(async (req: NextRequest, ctx, params) => {
const { entityType } = parseQuery(req, querySchema);
gateForEdit(entityType, ctx);
const body = await req.json();
const { values } = setValuesSchema.parse(body);
const { values } = await parseBody(req, setValuesSchema);
const result = await setValues(
entityId,

View File

@@ -1,6 +1,7 @@
import { NextResponse } from 'next/server';
import { withAuth, withPermission } from '@/lib/api/helpers';
import { parseBody } from '@/lib/api/route-helpers';
import { errorResponse } from '@/lib/errors';
import { exportCsv } from '@/lib/services/expense-export';
import { listExpensesSchema } from '@/lib/validators/expenses';
@@ -9,8 +10,7 @@ import { createAuditLog } from '@/lib/audit';
export const POST = withAuth(
withPermission('expenses', 'view', async (req, ctx) => {
try {
const body = await req.json().catch(() => ({}));
const query = listExpensesSchema.parse(body);
const query = await parseBody(req, listExpensesSchema);
const csv = await exportCsv(ctx.portId, query);
void createAuditLog({

View File

@@ -1,6 +1,7 @@
import { NextResponse } from 'next/server';
import { withAuth, withPermission } from '@/lib/api/helpers';
import { parseBody } from '@/lib/api/route-helpers';
import { errorResponse } from '@/lib/errors';
import { exportParentCompany } from '@/lib/services/expense-export';
import { listExpensesSchema } from '@/lib/validators/expenses';
@@ -11,8 +12,7 @@ import { listExpensesSchema } from '@/lib/validators/expenses';
export const POST = withAuth(
withPermission('expenses', 'export', async (req, ctx) => {
try {
const body = await req.json().catch(() => ({}));
const query = listExpensesSchema.parse(body);
const query = await parseBody(req, listExpensesSchema);
const pdf = await exportParentCompany(ctx.portId, query);
return new NextResponse(Buffer.from(pdf), {

View File

@@ -1,6 +1,7 @@
import { NextResponse } from 'next/server';
import { withAuth, withPermission } from '@/lib/api/helpers';
import { parseBody } from '@/lib/api/route-helpers';
import { errorResponse } from '@/lib/errors';
import { streamExpensePdf } from '@/lib/services/expense-pdf.service';
import { exportExpensePdfSchema } from '@/lib/validators/expenses';
@@ -32,8 +33,7 @@ export const dynamic = 'force-dynamic';
export const POST = withAuth(
withPermission('expenses', 'export', async (req, ctx) => {
try {
const body = await req.json().catch(() => ({}));
const input = exportExpensePdfSchema.parse(body);
const input = await parseBody(req, exportExpensePdfSchema);
const { stream, suggestedFilename } = await streamExpensePdf({
portId: ctx.portId,

View File

@@ -20,6 +20,8 @@ export const POST = withAuth(
const metadata = uploadFileSchema.parse({
filename: (formData.get('filename') as string | null) ?? file.name,
clientId: formData.get('clientId') as string | undefined,
yachtId: formData.get('yachtId') as string | undefined,
companyId: formData.get('companyId') as string | undefined,
category: formData.get('category') as string | undefined,
entityType: formData.get('entityType') as string | undefined,
entityId: formData.get('entityId') as string | undefined,

View File

@@ -13,7 +13,7 @@ export const POST = withAuth(
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
});
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -3,11 +3,7 @@ import { NextResponse } from 'next/server';
import { withAuth, withPermission } from '@/lib/api/helpers';
import { parseBody } from '@/lib/api/route-helpers';
import { errorResponse } from '@/lib/errors';
import {
getInterestById,
updateInterest,
archiveInterest,
} from '@/lib/services/interests.service';
import { getInterestById, updateInterest, archiveInterest } from '@/lib/services/interests.service';
import { updateInterestSchema } from '@/lib/validators/interests';
export const GET = withAuth(
@@ -47,7 +43,7 @@ export const DELETE = withAuth(
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
});
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -9,7 +9,7 @@ export const PATCH = withAuth(async (_req, ctx, params) => {
const { notificationId } = params;
if (!notificationId) throw new NotFoundError('Notification');
await notificationsService.markRead(notificationId, ctx.userId);
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -7,7 +7,7 @@ import * as notificationsService from '@/lib/services/notifications.service';
export const POST = withAuth(async (_req, ctx) => {
try {
await notificationsService.markAllRead(ctx.userId, ctx.portId);
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -43,7 +43,7 @@ export const DELETE = withAuth(
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
});
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -2,6 +2,7 @@ import { NextRequest, NextResponse } from 'next/server';
import { sql } from 'drizzle-orm';
import { withAuth } from '@/lib/api/helpers';
import { parseBody } from '@/lib/api/route-helpers';
import { db } from '@/lib/db';
import { errorResponse } from '@/lib/errors';
import { getRecentlyViewed, trackView } from '@/lib/services/recently-viewed.service';
@@ -255,7 +256,7 @@ export const GET = withAuth(async (req: NextRequest, ctx) => {
const pairs = await getRecentlyViewed(ctx.userId, ctx.portId, limit);
const items = await hydrate(ctx.portSlug, ctx.portId, pairs);
return NextResponse.json({ items });
return NextResponse.json({ data: items });
} catch (error) {
return errorResponse(error);
}
@@ -263,12 +264,9 @@ export const GET = withAuth(async (req: NextRequest, ctx) => {
export const POST = withAuth(async (req: NextRequest, ctx) => {
try {
const body = await req.json();
const parsed = trackViewSchema.parse(body);
const parsed = await parseBody(req, trackViewSchema);
trackView(ctx.userId, ctx.portId, parsed.type, parsed.id);
return NextResponse.json({ ok: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -20,7 +20,7 @@ export const PUT = withAuth(
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
});
return NextResponse.json({ success: true });
return new NextResponse(null, { status: 204 });
} catch (error) {
return errorResponse(error);
}

View File

@@ -169,11 +169,12 @@ export function CustomFieldsManager() {
<div className="rounded-md border border-amber-300 bg-amber-50 px-3 py-2.5 text-xs text-amber-900">
<strong>Heads up:</strong> custom fields render in detail-page sidebars and the entity
export, but they don&rsquo;t plug into core platform behaviour: search doesn&rsquo;t index
them, the recommender doesn&rsquo;t score on them, audit logs don&rsquo;t diff them, and
merge-tokens won&rsquo;t expand them in EOI/contract templates. Use them for rep-only
annotations (e.g. &ldquo;Berth visit notes&rdquo;, &ldquo;Referral source&rdquo;) — anything
load-bearing for the deal flow needs a first-class column.
export, and merge-tokens of the form{' '}
<code className="rounded bg-amber-100 px-1">{`{{custom.fieldName}}`}</code> now expand in
EOI/contract/email templates for client/interest/berth contexts. They still don&rsquo;t plug
into the global search index, the berth recommender, or the entity-diff audit log — use them
for rep-only annotations and template-merge values, but anything load-bearing for the deal
flow still needs a first-class column.
</div>
<Tabs value={activeTab} onValueChange={(v) => setActiveTab(v as EntityTab)}>

View File

@@ -0,0 +1,88 @@
'use client';
import { useState } from 'react';
import { useQueryClient } from '@tanstack/react-query';
import { FileGrid } from '@/components/files/file-grid';
import { FileUploadZone } from '@/components/files/file-upload-zone';
import { FilePreviewDialog } from '@/components/files/file-preview-dialog';
import { PermissionGate } from '@/components/shared/permission-gate';
import { usePaginatedQuery } from '@/hooks/use-paginated-query';
import { useRealtimeInvalidation } from '@/hooks/use-realtime-invalidation';
import { apiFetch } from '@/lib/api/client';
import type { FileRow } from '@/components/files/file-grid';
interface CompanyFilesTabProps {
companyId: string;
}
export function CompanyFilesTab({ companyId }: CompanyFilesTabProps) {
const queryClient = useQueryClient();
const [previewFile, setPreviewFile] = useState<FileRow | null>(null);
const { data, isLoading } = usePaginatedQuery<FileRow>({
queryKey: ['files', { companyId }],
endpoint: `/api/v1/files?companyId=${encodeURIComponent(companyId)}`,
filterDefinitions: [],
});
useRealtimeInvalidation({
'file:uploaded': [['files', { companyId }]],
'file:updated': [['files', { companyId }]],
'file:deleted': [['files', { companyId }]],
});
const handleDownload = async (file: FileRow) => {
try {
const res = await apiFetch<{ data: { url: string; filename: string } }>(
`/api/v1/files/${file.id}/download`,
);
const a = document.createElement('a');
a.href = res.data.url;
a.download = res.data.filename;
a.click();
} catch {
// silent
}
};
const handleDelete = async (file: FileRow) => {
if (!confirm(`Delete "${file.filename}"? This cannot be undone.`)) return;
try {
await apiFetch(`/api/v1/files/${file.id}`, { method: 'DELETE' });
queryClient.invalidateQueries({ queryKey: ['files', { companyId }] });
} catch {
// silent
}
};
return (
<div className="space-y-4">
<PermissionGate resource="files" action="upload">
<FileUploadZone
companyId={companyId}
onUploadComplete={() => {
queryClient.invalidateQueries({ queryKey: ['files', { companyId }] });
}}
/>
</PermissionGate>
<FileGrid
files={data}
onDownload={handleDownload}
onPreview={setPreviewFile}
onRename={() => {}}
onDelete={handleDelete}
isLoading={isLoading}
/>
<FilePreviewDialog
open={!!previewFile}
onOpenChange={(open) => !open && setPreviewFile(null)}
fileId={previewFile?.id}
fileName={previewFile?.filename}
mimeType={previewFile?.mimeType ?? undefined}
/>
</div>
);
}

View File

@@ -11,6 +11,7 @@ import { NotesList } from '@/components/shared/notes-list';
import { EntityActivityFeed } from '@/components/shared/entity-activity-feed';
import { CompanyMembersTab } from '@/components/companies/company-members-tab';
import { CompanyOwnedYachtsTab } from '@/components/companies/company-owned-yachts-tab';
import { CompanyFilesTab } from '@/components/companies/company-files-tab';
import { AddressesEditor, type Address } from '@/components/shared/addresses-editor';
import { apiFetch } from '@/lib/api/client';
import type { CountryCode } from '@/lib/i18n/countries';
@@ -226,9 +227,11 @@ export function getCompanyTabs({
/>
),
},
// The Documents tab was a "Coming soon" stub. Hidden until the
// /api/v1/files endpoint accepts a companyId filter (the schema
// supports it; the validator doesn't).
{
id: 'documents',
label: 'Documents',
content: <CompanyFilesTab companyId={companyId} />,
},
{
id: 'notes',
label: 'Notes',

View File

@@ -16,6 +16,8 @@ interface FileUploadZoneProps {
entityType?: string;
entityId?: string;
clientId?: string;
yachtId?: string;
companyId?: string;
onUploadComplete?: () => void;
}
@@ -23,6 +25,8 @@ export function FileUploadZone({
entityType,
entityId,
clientId,
yachtId,
companyId,
onUploadComplete,
}: FileUploadZoneProps) {
const [isDragOver, setIsDragOver] = useState(false);
@@ -46,6 +50,8 @@ export function FileUploadZone({
formData.append('file', file);
formData.append('filename', file.name);
if (clientId) formData.append('clientId', clientId);
if (yachtId) formData.append('yachtId', yachtId);
if (companyId) formData.append('companyId', companyId);
if (entityType) formData.append('entityType', entityType);
if (entityId) formData.append('entityId', entityId);
@@ -54,8 +60,7 @@ export function FileUploadZone({
);
// Use fetch directly for FormData (apiFetch JSON-encodes body)
const portId = (await import('@/stores/ui-store'))
.useUIStore.getState().currentPortId;
const portId = (await import('@/stores/ui-store')).useUIStore.getState().currentPortId;
const headers = new Headers();
if (portId) headers.set('X-Port-Id', portId);
const uploadRes = await fetch('/api/v1/files/upload', {
@@ -73,9 +78,7 @@ export function FileUploadZone({
);
} catch {
setUploading((prev) =>
prev.map((u) =>
u.id === uploadId ? { ...u, error: 'Upload failed' } : u,
),
prev.map((u) => (u.id === uploadId ? { ...u, error: 'Upload failed' } : u)),
);
}
}),
@@ -87,7 +90,7 @@ export function FileUploadZone({
onUploadComplete?.();
}, 1500);
},
[clientId, entityType, entityId, onUploadComplete],
[clientId, yachtId, companyId, entityType, entityId, onUploadComplete],
);
const handleDrop = useCallback(
@@ -135,9 +138,7 @@ export function FileUploadZone({
>
<Upload className="h-8 w-8 text-muted-foreground mb-2" />
<p className="text-sm font-medium">Drop files here or click to upload</p>
<p className="text-xs text-muted-foreground mt-1">
PDF, Word, Excel, images up to 50MB
</p>
<p className="text-xs text-muted-foreground mt-1">PDF, Word, Excel, images up to 50MB</p>
<input
ref={inputRef}
type="file"
@@ -169,9 +170,7 @@ export function FileUploadZone({
{u.error && (
<button
type="button"
onClick={() =>
setUploading((prev) => prev.filter((x) => x.id !== u.id))
}
onClick={() => setUploading((prev) => prev.filter((x) => x.id !== u.id))}
>
<X className="h-3.5 w-3.5 text-muted-foreground" />
</button>

View File

@@ -225,10 +225,10 @@ export function useSearch(query: string, opts: UseSearchOptions = {}) {
staleTime: 60_000,
});
const recentlyViewedQuery = useQuery<{ items: RecentlyViewedItem[] }>({
const recentlyViewedQuery = useQuery<{ data: RecentlyViewedItem[] }>({
queryKey: ['search', 'recently-viewed'],
queryFn: ({ signal }) =>
apiFetch<{ items: RecentlyViewedItem[] }>('/api/v1/search/recently-viewed', { signal }),
apiFetch<{ data: RecentlyViewedItem[] }>('/api/v1/search/recently-viewed', { signal }),
staleTime: 30_000,
});
@@ -238,7 +238,7 @@ export function useSearch(query: string, opts: UseSearchOptions = {}) {
isFetching: searchQuery.isFetching,
enabled,
recentSearches: recentSearchQuery.data?.searches ?? [],
recentlyViewed: recentlyViewedQuery.data?.items ?? [],
recentlyViewed: recentlyViewedQuery.data?.data ?? [],
};
}

View File

@@ -0,0 +1,34 @@
-- Reconcile the system_settings unique-index drift surfaced in the
-- final-deferred audit. The Drizzle schema declares a uniqueIndex on
-- (key, port_id), but Postgres treats NULL values as distinct by default.
-- That means two rows with `(same_key, NULL)` would BOTH be allowed —
-- a global-setting collision the index claims to prevent.
--
-- This was not just theoretical: the dev DB had 60+ duplicate
-- `(storage_backend, NULL)` rows from buggy non-upsert call sites that
-- predated the upsert hardening. Those rows accumulated invisibly because
-- the index allowed them. Step 1 dedupes (keeps the most recent row per
-- `(key, port_id)` group); step 2 rebuilds the unique index with
-- `NULLS NOT DISTINCT` (Postgres 15+) so future inserts can't recreate the
-- ambiguity.
-- Step 1: dedupe duplicate rows, keeping the row with the latest updated_at.
-- Uses a CTE + ROW_NUMBER() so the keeper is deterministic across reruns.
WITH ranked AS (
SELECT ctid,
ROW_NUMBER() OVER (
PARTITION BY "key", "port_id"
ORDER BY "updated_at" DESC, ctid DESC
) AS rn
FROM "system_settings"
)
DELETE FROM "system_settings"
USING ranked
WHERE "system_settings".ctid = ranked.ctid AND ranked.rn > 1;
-- Step 2: replace the unique index with one that treats NULLs as equal,
-- so global settings (port_id IS NULL) are unique by key alone.
DROP INDEX IF EXISTS "system_settings_key_port_idx";
CREATE UNIQUE INDEX "system_settings_key_port_idx"
ON "system_settings" ("key", "port_id")
NULLS NOT DISTINCT;

View File

@@ -135,9 +135,14 @@ export const systemSettings = pgTable(
updatedAt: timestamp('updated_at', { withTimezone: true }).notNull().defaultNow(),
},
(table) => [
// Migration 0047 rebuilds this index with `NULLS NOT DISTINCT` so a
// global setting (port_id IS NULL) is unique by key alone — the
// default `NULLS DISTINCT` semantics let duplicates accumulate.
// Drizzle's `uniqueIndex` builder doesn't surface NULLS NOT DISTINCT,
// so the migration is the source of truth for that flag and
// `db:push` against an empty DB would skip it (matches the
// documented limitation for `berths.current_pdf_version_id`).
uniqueIndex('system_settings_key_port_idx').on(table.key, table.portId),
// Note: the PRIMARY KEY is `key` alone based on schema, but unique on (key, port_id)
// We use key as primary key per SQL schema
],
);

View File

@@ -76,15 +76,26 @@ export async function setAiBudget(
if (next.softCapTokens > next.hardCapTokens) {
throw new ValidationError('softCapTokens cannot exceed hardCapTokens');
}
// True upsert (atomic on the (key, port_id) NULLS NOT DISTINCT index
// — migration 0047). Replaces a delete-then-insert pattern that had a
// race window where two concurrent updates could both DELETE and both
// INSERT, accumulating duplicates.
await db
.delete(systemSettings)
.where(and(eq(systemSettings.key, KEY), eq(systemSettings.portId, portId)));
await db.insert(systemSettings).values({
key: KEY,
portId,
value: next as unknown as Record<string, unknown>,
updatedBy: userId,
});
.insert(systemSettings)
.values({
key: KEY,
portId,
value: next as unknown as Record<string, unknown>,
updatedBy: userId,
})
.onConflictDoUpdate({
target: [systemSettings.key, systemSettings.portId],
set: {
value: next as unknown as Record<string, unknown>,
updatedBy: userId,
updatedAt: new Date(),
},
});
return next;
}

View File

@@ -38,9 +38,12 @@ import {
berthPdfVersions,
clients,
clientContacts,
customFieldDefinitions,
customFieldValues,
interests,
ports,
} from '@/lib/db/schema';
import { inArray } from 'drizzle-orm';
import type { DocumentSend } from '@/lib/db/schema';
import { ForbiddenError, NotFoundError, ValidationError } from '@/lib/errors';
import { logger } from '@/lib/logger';
@@ -162,9 +165,93 @@ export async function buildMergeValues(
}
}
// Custom-field tokens (`{{custom.<fieldName>}}`). The validator allows
// any matching shape; the resolver here looks up real values per-port,
// per-entity and substitutes them. Unknown field names stay
// unresolved — `findUnresolvedTokens` flags them at preview time so
// the rep can edit the template before sending.
await mergeCustomFieldValues(values, portId, recipient, context);
return values;
}
interface CustomMergeContext {
berthId?: string;
brochureLabel?: string;
}
/**
* Resolve `{{custom.<fieldName>}}` tokens. Reads every per-port custom
* field definition for the entity types currently in scope (client,
* interest, berth) and joins to the actual stored value for each entity
* id we have on hand. Boolean values render as 'true' / 'false', dates
* as ISO yyyy-mm-dd, numbers as plain numerics, selects/text verbatim.
*/
async function mergeCustomFieldValues(
values: Record<string, string>,
portId: string,
recipient: SendRecipientInput,
context: CustomMergeContext,
): Promise<void> {
// Build the (entityType → entityId) map for the current send context.
const entityIdsByType = new Map<string, string>();
if (recipient.clientId) entityIdsByType.set('client', recipient.clientId);
if (recipient.interestId) entityIdsByType.set('interest', recipient.interestId);
if (context.berthId) entityIdsByType.set('berth', context.berthId);
if (entityIdsByType.size === 0) return;
const definitions = await db
.select()
.from(customFieldDefinitions)
.where(
and(
eq(customFieldDefinitions.portId, portId),
inArray(customFieldDefinitions.entityType, Array.from(entityIdsByType.keys())),
),
);
if (definitions.length === 0) return;
const fieldIds = definitions.map((d) => d.id);
const entityIds = Array.from(entityIdsByType.values());
const valueRows = await db
.select()
.from(customFieldValues)
.where(
and(
inArray(customFieldValues.fieldId, fieldIds),
inArray(customFieldValues.entityId, entityIds),
),
);
const valueByFieldEntity = new Map<string, unknown>();
for (const row of valueRows) {
valueByFieldEntity.set(`${row.fieldId}|${row.entityId}`, row.value);
}
for (const def of definitions) {
const entityId = entityIdsByType.get(def.entityType);
if (!entityId) continue;
const raw = valueByFieldEntity.get(`${def.id}|${entityId}`);
if (raw === undefined || raw === null) continue;
const token = `{{custom.${def.fieldName}}}`;
values[token] = stringifyCustomValue(raw, def.fieldType);
}
}
function stringifyCustomValue(raw: unknown, fieldType: string): string {
if (raw === null || raw === undefined) return '';
switch (fieldType) {
case 'boolean':
return raw ? 'true' : 'false';
case 'date':
return typeof raw === 'string' ? raw.slice(0, 10) : String(raw);
case 'number':
return String(raw);
default:
return typeof raw === 'string' ? raw : JSON.stringify(raw);
}
}
/**
* Render a body for the dry-run UI. Returns `{ html, unresolved }`. The UI
* uses `unresolved` to populate the warning chip; the rep can't submit
@@ -295,9 +382,18 @@ async function streamAttachmentOrLink(
// to the body. Per §11.1 the size decision is made BEFORE the SMTP relay,
// so we never produce duplicate sends.
const storage = await getStorageBackend();
// Bind the proxy token to the issuing port slug. The storage key is
// already structured `${portSlug}/...` via generateStorageKey() — this
// closes the loop so a buggy future call site that hands us a key from
// a different port can't mint a valid 24h URL for it.
const portRow = await db.query.ports.findFirst({
where: eq(ports.id, portId),
columns: { slug: true },
});
const { url } = await storage.presignDownload(attachment.storageKey, {
expirySeconds: 24 * 60 * 60,
filename: attachment.fileName,
portSlug: portRow?.slug,
});
// HTML-escape the filename: brochure filenames are admin-supplied and
// could in theory carry markup (e.g. `"><script>...`). Even a benign

View File

@@ -71,6 +71,8 @@ export async function uploadFile(
.values({
portId,
clientId: data.clientId ?? null,
yachtId: data.yachtId ?? null,
companyId: data.companyId ?? null,
filename: sanitizedFilename,
originalName: sanitizedOriginal,
mimeType: file.mimeType,
@@ -219,13 +221,19 @@ export async function deleteFile(id: string, portId: string, meta: AuditMeta) {
// ─── List ─────────────────────────────────────────────────────────────────────
export async function listFiles(portId: string, query: ListFilesInput) {
const { page, limit, sort, order, search, clientId, category } = query;
const { page, limit, sort, order, search, clientId, yachtId, companyId, category } = query;
const filters = [];
if (clientId) {
filters.push(eq(files.clientId, clientId));
}
if (yachtId) {
filters.push(eq(files.yachtId, yachtId));
}
if (companyId) {
filters.push(eq(files.companyId, companyId));
}
if (category) {
filters.push(eq(files.category, category));
}

View File

@@ -66,20 +66,27 @@ async function readRow(portId: string | null): Promise<StoredOcrConfig | null> {
}
async function writeRow(portId: string | null, value: StoredOcrConfig, userId: string) {
- // upsert: delete + insert keeps logic simple given the (key, port_id) unique index.
+ // True upsert. The previous delete-then-insert pattern had a race
+ // window where two concurrent writes could both DELETE and both INSERT,
+ // accumulating duplicate rows (caught and dedupe'd by migration 0047).
+ // The (key, port_id) NULLS NOT DISTINCT unique index makes this
+ // upsert atomic.
await db
- .delete(systemSettings)
- .where(
-   portId === null
-     ? and(eq(systemSettings.key, KEY), isNull(systemSettings.portId))
-     : and(eq(systemSettings.key, KEY), eq(systemSettings.portId, portId)),
- );
- await db.insert(systemSettings).values({
-   key: KEY,
-   portId,
-   value: value as unknown as Record<string, unknown>,
-   updatedBy: userId,
- });
+ .insert(systemSettings)
+ .values({
+   key: KEY,
+   portId,
+   value: value as unknown as Record<string, unknown>,
+   updatedBy: userId,
+ })
+ .onConflictDoUpdate({
+   target: [systemSettings.key, systemSettings.portId],
+   set: {
+     value: value as unknown as Record<string, unknown>,
+     updatedBy: userId,
+     updatedAt: new Date(),
+   },
+ });
}
/**
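
The `onConflictDoUpdate` target only resolves because migration 0047 rebuilt the unique index so NULL `port_id` values collide. A sketch of the DDL inside an async migration step (Postgres 15+); the index name is assumed, and the real migration also dedupes the accumulated duplicate rows before creating the index:

    import { sql } from 'drizzle-orm';

    await db.execute(sql`DROP INDEX IF EXISTS system_settings_key_port_id_key`);
    await db.execute(sql`
      CREATE UNIQUE INDEX system_settings_key_port_id_key
        ON system_settings (key, port_id) NULLS NOT DISTINCT
    `);
    // Without NULLS NOT DISTINCT, two (key, NULL) rows never conflict, so
    // ON CONFLICT would silently never fire for global settings.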


@@ -130,23 +130,28 @@ export async function saveStages(args: SaveStagesArgs, meta: AuditMeta): Promise
}
}
- // Upsert the stage list.
+ // Upsert the stage list. Read first for the audit-log diff; the actual
+ // write goes through onConflictDoUpdate so concurrent admin saves can't
+ // race-insert duplicates (migration 0047 made the index NULLS NOT DISTINCT).
const existing = await db.query.systemSettings.findFirst({
where: and(eq(systemSettings.key, SETTING_KEY), eq(systemSettings.portId, args.portId)),
});
if (existing) {
await db
.update(systemSettings)
.set({ value: args.stages, updatedBy: meta.userId, updatedAt: new Date() })
.where(and(eq(systemSettings.key, SETTING_KEY), eq(systemSettings.portId, args.portId)));
} else {
- await db.insert(systemSettings).values({
+ await db
+   .insert(systemSettings)
+   .values({
key: SETTING_KEY,
value: args.stages,
portId: args.portId,
updatedBy: meta.userId,
+   })
+   .onConflictDoUpdate({
+     target: [systemSettings.key, systemSettings.portId],
+     set: {
+       value: args.stages,
+       updatedBy: meta.userId,
+       updatedAt: new Date(),
+     },
});
}
void createAuditLog({
userId: meta.userId,


@@ -38,23 +38,30 @@ export async function getSetting(key: string, portId: string) {
}
export async function upsertSetting(key: string, value: unknown, portId: string, meta: AuditMeta) {
+ // Read existing first for the audit-log diff (before/after). The actual
+ // write goes through onConflictDoUpdate so two concurrent calls can't
+ // both observe `existing=null` and both INSERT — the (key, port_id)
+ // unique index now treats NULLs as equal (migration 0047).
const existing = await db.query.systemSettings.findFirst({
where: and(eq(systemSettings.key, key), eq(systemSettings.portId, portId)),
});
if (existing) {
await db
.update(systemSettings)
.set({ value, updatedBy: meta.userId, updatedAt: new Date() })
.where(and(eq(systemSettings.key, key), eq(systemSettings.portId, portId)));
} else {
- await db.insert(systemSettings).values({
+ await db
+   .insert(systemSettings)
+   .values({
key,
- value,
+ value: value as Record<string, unknown>,
portId,
updatedBy: meta.userId,
+   })
+   .onConflictDoUpdate({
+     target: [systemSettings.key, systemSettings.portId],
+     set: {
+       value: value as Record<string, unknown>,
+       updatedBy: meta.userId,
+       updatedAt: new Date(),
+     },
});
}
void createAuditLog({
userId: meta.userId,
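
Behaviorally, the race now degrades to last-writer-wins instead of duplicate rows. A hedged sketch of the concurrent case (the key is one the codebase stores; the values are illustrative):

    // Both calls can observe existing === null; the slower INSERT then hits
    // the (key, port_id) unique index and is converted into an UPDATE.
    await Promise.all([
      upsertSetting('email_attach_threshold_mb', 10, portId, meta),
      upsertSetting('email_attach_threshold_mb', 25, portId, meta),
    ]);
    // Post-condition: exactly one row for that (key, portId); its value is
    // whichever write committed last.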


@@ -97,6 +97,17 @@ interface ProxyTokenPayload {
f?: string;
/** Optional content-type override. */
c?: string;
/**
* Port-binding: the port slug the issuer was scoped to when minting
* the token. The verifier asserts the storage key starts with
* `${p}/`. Defense-in-depth against a buggy issuer in some future
* code path that mixes up port scopes — every storage key generated
* by `generateStorageKey()` already prefixes the port slug, so this
* check costs nothing and catches any drift. Optional, so tokens
* minted before this field shipped still verify; issuers that pass a
* port slug (e.g. document-sends) include it, other callers omit it.
*/
p?: string;
}
function b64urlEncode(buf: Buffer): string {
@@ -165,6 +176,16 @@ export function verifyProxyToken(
if (payload.op !== expectedOp) {
return { ok: false, reason: 'op-mismatch' };
}
// Port-binding: when the issuer attached `p`, assert the key starts
// with `${p}/`. This is the actual enforcement — `validateStorageKey`
// already prevents path traversal but doesn't constrain which port's
// namespace the key belongs to. Tokens without `p` skip this check
// (legacy / non-port-scoped issuers continue to work).
if (payload.p !== undefined) {
if (!payload.k.startsWith(`${payload.p}/`)) {
return { ok: false, reason: 'port-mismatch' };
}
}
return { ok: true, payload };
}
@@ -301,7 +322,14 @@ export class FilesystemBackend implements StorageBackend {
validateStorageKey(key);
const expiresAt = Math.floor(Date.now() / 1000) + (opts.expirySeconds ?? 900);
const token = signProxyToken(
- { k: key, e: expiresAt, n: randomUUID(), op: 'put', c: opts.contentType },
+ {
+   k: key,
+   e: expiresAt,
+   n: randomUUID(),
+   op: 'put',
+   c: opts.contentType,
+   ...(opts.portSlug ? { p: opts.portSlug } : {}),
+ },
this.hmacSecret,
);
return { url: `/api/storage/${token}`, method: 'PUT' };
@@ -319,6 +347,7 @@ export class FilesystemBackend implements StorageBackend {
op: 'get',
f: opts.filename,
c: opts.contentType,
...(opts.portSlug ? { p: opts.portSlug } : {}),
},
this.hmacSecret,
);
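
A test-style sketch of the new rejection path (vitest, to match the suite in the commit message). The verifier's argument order is assumed as `(token, expectedOp, secret)`, and the port slugs are hypothetical:

    import { randomUUID } from 'node:crypto';
    import { expect, test } from 'vitest';

    test('port-bound token cannot reach another port namespace', () => {
      const secret = 'test-secret'; // illustrative; the real secret comes from config
      const e = Math.floor(Date.now() / 1000) + 60;

      // Minted for 'valencia' but pointing at a 'barcelona/' key: rejected.
      const cross = signProxyToken(
        { k: 'barcelona/docs/brochure.pdf', e, n: randomUUID(), op: 'get', p: 'valencia' },
        secret,
      );
      expect(verifyProxyToken(cross, 'get', secret)).toEqual({ ok: false, reason: 'port-mismatch' });

      // Matching prefix verifies.
      const good = signProxyToken(
        { k: 'valencia/docs/brochure.pdf', e, n: randomUUID(), op: 'get', p: 'valencia' },
        secret,
      );
      expect(verifyProxyToken(good, 'get', secret).ok).toBe(true);
    });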


@@ -38,6 +38,15 @@ export interface PresignOpts {
contentType?: string;
/** Filename used in Content-Disposition for downloads. */
filename?: string;
/**
* Optional port slug to bind the token to. The filesystem proxy
* verifier asserts the storage key starts with `${portSlug}/` when
* present. S3 backend ignores this field (presigned S3 URLs carry
* their own signature scope). Pass it whenever the issuer is in a
* port-scoped request — `generateStorageKey()` already prefixes the
* slug, so this is the matching enforcement.
*/
portSlug?: string;
}
export interface StorageBackend {
@@ -216,9 +225,10 @@ export async function presignDownloadUrl(
key: string,
expirySeconds = 900,
filename?: string,
portSlug?: string,
): Promise<string> {
const backend = await getStorageBackend();
const { url } = await backend.presignDownload(key, { expirySeconds, filename });
const { url } = await backend.presignDownload(key, { expirySeconds, filename, portSlug });
return url;
}
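
Call-site sketch for a port-scoped issuer; the key layout comes from `generateStorageKey()`, and the path segments here are illustrative:

    // 24h emailed download link, bound to the issuing port's namespace.
    const url = await presignDownloadUrl(
      `${port.slug}/document-sends/${attachmentId}.pdf`, // key already carries the slug prefix
      24 * 60 * 60,
      'Brochure.pdf',
      port.slug, // verifier asserts the key starts with `${port.slug}/`
    );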


@@ -91,3 +91,25 @@ export const MERGE_FIELDS: MergeFieldCatalog = {
export const VALID_MERGE_TOKENS: ReadonlySet<string> = new Set(
Object.values(MERGE_FIELDS).flatMap((scope) => scope.map((field) => field.token)),
);
/**
* Custom-field merge tokens follow the pattern `{{custom.<fieldName>}}`
* where fieldName matches the per-port custom field's `fieldName` (the
* machine identifier, not the display label). Field names are validated
* at definition time as `[a-z][a-z0-9_]*` so this regex is the matching
* recogniser. Tokens that match this shape bypass the static
* `VALID_MERGE_TOKENS` check — the resolver fetches the actual
* definitions per port at expand time.
*/
export const CUSTOM_MERGE_TOKEN_RE = /^\{\{custom\.[a-z][a-z0-9_]*\}\}$/;
/** True when the token is a `{{custom.<fieldName>}}` shape. */
export function isCustomMergeToken(token: string): boolean {
return CUSTOM_MERGE_TOKEN_RE.test(token);
}
/** Extract the fieldName from a `{{custom.<fieldName>}}` token. */
export function extractCustomFieldName(token: string): string | null {
const match = token.match(/^\{\{custom\.([a-z][a-z0-9_]*)\}\}$/);
return match?.[1] ?? null;
}
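
Quick behavior check for the helpers; the field names are hypothetical:

    isCustomMergeToken('{{custom.berth_length}}');      // true
    isCustomMergeToken('{{custom.BerthLength}}');       // false: fieldName must match [a-z][a-z0-9_]*
    isCustomMergeToken('{{client.first_name}}');        // false: not the custom.* shape
    extractCustomFieldName('{{custom.berth_length}}');  // 'berth_length'
    extractCustomFieldName('{{not.a.custom.token}}');   // null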


@@ -1,16 +1,24 @@
import { z } from 'zod';
import { baseListQuerySchema } from '@/lib/api/list-query';
- import { VALID_MERGE_TOKENS } from '@/lib/templates/merge-fields';
+ import { VALID_MERGE_TOKENS, isCustomMergeToken } from '@/lib/templates/merge-fields';
// A token is acceptable if it's in the static catalog OR matches the
// dynamic `{{custom.<fieldName>}}` shape. The resolver checks the actual
// per-port custom-field definition at expand time and substitutes the
// stored value (or leaves the token unresolved if no definition matches).
function isAcceptableMergeToken(token: string): boolean {
return VALID_MERGE_TOKENS.has(token) || isCustomMergeToken(token);
}
const mergeFieldsSchema = z
.array(z.string())
.optional()
.default([])
.refine(
- (tokens) => tokens.every((t) => VALID_MERGE_TOKENS.has(t)),
+ (tokens) => tokens.every(isAcceptableMergeToken),
(tokens) => {
- const unknown = tokens?.filter((t) => !VALID_MERGE_TOKENS.has(t)) ?? [];
+ const unknown = tokens?.filter((t) => !isAcceptableMergeToken(t)) ?? [];
return { message: `Unknown merge tokens: ${unknown.join(', ')}` };
},
);
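
Illustrative accept/reject behavior. `mergeFieldsSchema` is module-local, and the catalog token shown is assumed to exist in `MERGE_FIELDS`:

    // Accepted: a static catalog token alongside a dynamic custom token.
    mergeFieldsSchema.parse(['{{client.firstName}}', '{{custom.berth_length}}']);

    // Rejected: uppercase and '-' break the [a-z][a-z0-9_]* fieldName rule.
    const bad = mergeFieldsSchema.safeParse(['{{custom.Berth-Length}}']);
    // bad.success === false, message: 'Unknown merge tokens: {{custom.Berth-Length}}'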


@@ -5,6 +5,8 @@ import { baseListQuerySchema } from '@/lib/api/list-query';
export const uploadFileSchema = z.object({
filename: z.string().min(1).max(255),
clientId: z.string().optional(),
yachtId: z.string().optional(),
companyId: z.string().optional(),
category: z.string().optional(),
entityType: z.string().optional(),
entityId: z.string().optional(),
@@ -17,6 +19,8 @@ export const updateFileSchema = z.object({
export const listFilesSchema = baseListQuerySchema.extend({
clientId: z.string().optional(),
yachtId: z.string().optional(),
companyId: z.string().optional(),
category: z.string().optional(),
entityType: z.string().optional(),
entityId: z.string().optional(),
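
Upload-side counterpart of the same cross-entity change; a sketch of a body the extended schema now accepts (IDs and category are illustrative):

    uploadFileSchema.parse({
      filename: 'fleet-overview.pdf',
      companyId: 'cmp_01HEXAMPLE', // new: drives the company Documents tab
      category: 'documents',
    });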