diff --git a/.gitignore b/.gitignore index 5bcc94c..c45f93c 100644 --- a/.gitignore +++ b/.gitignore @@ -47,3 +47,9 @@ docker-compose.override.yml /.claude/ /.serena/ /ruvector.db + +# Filesystem storage backend root (FilesystemBackend default location) +/storage/ + +# Local berth-PDF + brochure samples used as upload fixtures during dev. +/berth_pdf_example/ diff --git a/CLAUDE.md b/CLAUDE.md index af30dfd..860ab5f 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -95,6 +95,16 @@ src/ - **Inline editing pattern:** detail pages (clients, yachts, companies, interests, residential clients/interests) use `<InlineEditableField>` (`src/components/shared/inline-editable-field.tsx`) for click-to-edit text/select/textarea fields and `<InlineTagEditor>` (`src/components/shared/inline-tag-editor.tsx`) for tag chips. Each entity exposes a `PUT /api/v1/<entity>/[id]/tags` endpoint backed by a `setTags` service helper that wipes-and-rewrites the join table inside a single transaction. There are no separate "Edit" modal forms on detail pages — the entire overview tab is editable in place. - **Notes (polymorphic across entity types):** `notes.service.ts` dispatches across `clientNotes`, `interestNotes`, `yachtNotes`, `companyNotes` based on an `entityType` discriminator. The shared notes component works for all four. `companyNotes` lacks an `updatedAt` column — the service substitutes `createdAt` so callers get a uniform shape. - **Route handler exports:** Next.js App Router `route.ts` files only allow specific named exports (`GET|POST|…`). Service-tested handler functions live in sibling `handlers.ts` files (e.g. `src/app/api/v1/yachts/[id]/handlers.ts`) and are imported by the colocated `route.ts` for `withAuth(withPermission(...))` wrapping. Integration tests import from `handlers.ts` directly to bypass auth/permission middleware. +- **Multi-berth interest model:** `interest_berths` is the source of truth for which berths an interest is linked to; `interests.berth_id` does not exist (dropped in migration 0029). Three role flags: `is_primary` (≤1 row per interest, enforced by partial unique index — surfaces as "the berth for this deal" in templates / forms / list views), `is_specific_interest` (true → berth shows as "Under Offer" on the public map; false → legal/EOI-only link), `is_in_eoi_bundle` (covered by the interest's EOI signature). Read/write through `src/lib/services/interest-berths.service.ts` helpers (`getPrimaryBerth`, `getPrimaryBerthsForInterests`, `upsertInterestBerth`, `setPrimaryBerth`, `removeInterestBerth`); never query `interest_berths` from outside that service. +- **Mooring number canonical format:** `^[A-Z]+\d+$` (e.g. `A1`, `B12`, `E18`) — no hyphen, no leading zeros. Stored, displayed, URL-encoded, and rendered in EOIs in this exact form. Phase 0 normalized the entire CRM dataset; the mooring-pattern regex gates the public `/api/public/berths/[mooringNumber]` route before any DB hit. +- **Public berths API:** `/api/public/berths` (list) and `/api/public/berths/[mooringNumber]` (single) are the public-facing data feed for the marketing website. Output shape mirrors the legacy NocoDB Berths shape verbatim (`"Mooring Number"`, `"Side Pontoon"`, etc.) — see `src/lib/services/public-berths.ts`. Cache headers: `s-maxage=300, stale-while-revalidate=60`. Status mapping: `"Sold"` (berth.status=sold) > `"Under Offer"` (status=under_offer OR has any active `interest_berths.is_specific_interest=true` link with `interests.outcome IS NULL`) > `"Available"`.
The companion `/api/public/health` endpoint returns `{env, appUrl}` so the website refuses to start when its `CRM_PUBLIC_URL` points at a different deployment env. +- **Berth recommender:** Pure SQL ranking (no AI). Lives in `src/lib/services/berth-recommender.service.ts`. Tier ladder A/B/C/D classifies each feasible berth based on its `interest_berths` aggregates. Heat scoring (recency / furthest stage / interest count / EOI count) only fires for tier B (lost/cancelled-only history); per-port admins tune the weights and related knobs via `system_settings` keys (`heat_weight_*`, `recommender_max_oversize_pct`, `recommender_top_n_default`, `fallthrough_policy`, `fallthrough_cooldown_days`, `tier_ladder_hide_late_stage`). The recommender enforces multi-port isolation both at the entry point (rejects cross-port interest lookups) AND inside the SQL aggregates CTE (defense-in-depth `i.port_id` filter). +- **EOI bundle / range formatter:** Multi-berth EOIs render the in-bundle berth set as a compact range string ("A1-A3, B5-B7") via `formatBerthRange()` in `src/lib/templates/berth-range.ts`. Used only inside the Documenso `Berth Range` form field — CRM UI always shows berths as individual chips. The `{{eoi.berthRange}}` token is in `VALID_MERGE_TOKENS`. +- **Pluggable storage backend:** Code never imports MinIO/S3 directly. All file I/O goes through `getStorageBackend()` from `src/lib/storage/`. Configured via `system_settings.storage_backend` ('s3' | 'filesystem'). Switching backends is a settings change + `pnpm tsx scripts/migrate-storage.ts` run. **Filesystem backend is single-node only**: refuses to start when `MULTI_NODE_DEPLOYMENT=true`. Multi-node deployments must use the s3-compatible backend. +- **Per-berth PDFs:** Versioned via `berth_pdf_versions`; `berths.current_pdf_version_id` always points to the latest active version. Storage key is UUID-based per upload (not version-numbered) so concurrent uploads can't collide on blob paths; `pg_advisory_xact_lock` per berth_id serializes the version-number allocation. 3-tier parser: AcroForm → OCR (Tesseract.js with positional heuristics) → optional AI (rep clicks "AI parse" only when OCR confidence is low). Magic-byte (`%PDF-`) check enforced on BOTH the in-server upload path AND the presigned-PUT path (the post-upload service streams the first 5 bytes via the storage backend). Mooring-number mismatch between PDF and target berth surfaces as a service-level `ConflictError` unless the apply call passes `confirmMooringMismatch: true`. +- **Brochures:** Per-port; default brochure marked via `is_default` (enforced by partial unique index on `(port_id) WHERE is_default=true AND archived_at IS NULL`). Archived brochures retain version history. Same upload flow as berth PDFs (presign + magic-byte verification on the post-upload register endpoint). +- **Send-from accounts (sales send-outs):** Configurable via `system_settings`; defaults to `sales@portnimara.com` for human-touch and `noreply@portnimara.com` for automation. SMTP/IMAP passwords are AES-256-GCM encrypted at rest; the API never returns decrypted secrets — only `*PassIsSet` boolean markers. Send-out audit goes to `document_sends` (separate from `audit_logs` because of volume + binary refs). Body markdown is XSS-safe via `renderEmailBody()` (escape-then-allowlist; tested against the standard XSS vector list). Rate limit: 50 individual sends per user per hour.
Pre-send size threshold: files > `email_attach_threshold_mb` ship as a 24h signed-URL link rather than an attachment (avoids the duplicate-send race from async bounces). The download-link fallback HTML-escapes the filename to prevent injection from admin-supplied brochure names. Bounce monitoring requires IMAP credentials in addition to SMTP — without them, the size-rejection banner stays disabled. +- **NocoDB berth import:** `pnpm tsx scripts/import-berths-from-nocodb.ts --apply --port-slug port-nimara` re-imports from the legacy NocoDB Berths table. Idempotent: rows where `updated_at > last_imported_at` (the "human edited this since last import" guard) are skipped unless `--force`. Adds `--update-snapshot` to also rewrite `src/lib/db/seed-data/berths.json`. Uses `pg_advisory_xact_lock` so two simultaneous runs serialize. Pure helpers in `src/lib/services/berth-import.ts` are unit-tested. - **Routes:** Multi-tenant via `[portSlug]` dynamic segment. Typed routes enabled. - **Pre-commit:** Husky + lint-staged runs ESLint fix + Prettier on staged `.ts`/`.tsx` files. The hook also blocks `.env*` files (including `.env.example`) from being committed; pass them via a separate workflow if needed. @@ -139,6 +149,14 @@ Domain-specific references: - `docs/eoi-documenso-field-mapping.md` — canonical mapping from `EoiContext` paths to the Documenso template's `formValues` keys, with the matching - AcroForm field names used by the in-app pathway. + AcroForm field names used by the in-app pathway. **Note:** the multi- + berth EOI bundle adds a new `Berth Range` form field populated by + `formatBerthRange()` from `src/lib/templates/berth-range.ts` — the live + Documenso template needs the field added before multi-berth EOIs render + with the compact range string instead of just the primary mooring. - `assets/README.md` — what the in-app EOI source PDF must contain and how to override its path in dev/test. +- `docs/berth-recommender-and-pdf-plan.md` — the comprehensive plan for the + Phase 0–8 berth-recommender + PDF + send-outs work bundle. Single source + of truth for the multi-berth interest model, recommender tier ladder, + pluggable storage, per-berth PDF parser, and sales send-out flows. diff --git a/docs/audit-final-deferred.md b/docs/audit-final-deferred.md new file mode 100644 index 0000000..6e79167 --- /dev/null +++ b/docs/audit-final-deferred.md @@ -0,0 +1,84 @@ +# Final audit deferred findings + +The pre-merge audit on `feat/berth-recommender` produced ~30 findings. The +critical + high-severity items were fixed in-branch. The items below are +medium / low severity and deferred to follow-up issues so the merge isn't +held up. Each entry is self-contained — pick one off and ship it. + +## Cross-cutting integration + +- **EOI in-app pathway silently swallows missing `Berth Range` AcroForm field** + — `src/lib/pdf/fill-eoi-form.ts:93`. `setText(form, 'Berth Range', ...)` + is wrapped in a try/catch that succeeds silently when the field is + absent. CLAUDE.md already warns ops about needing to add the field to + the live Documenso template; this code change would make the deployment + gap observable. Fix: when `context.eoiBerthRange` is non-empty AND the + field is absent, log at warn level + surface a structured response field. + +- **Email body merge expansion happens after token validation** — + `src/lib/services/document-sends.service.ts:399-403`. If a merge value + contains a `{{token}}` substring (e.g. 
a client name like + `"Acme {{discount}} Inc."`), the expanded body will contain a token + the unresolved-check missed and ships with literal braces. Fix: HTML- + escape merge values before expansion, OR run a second + `findUnresolvedTokens` against the expanded body. + +- **Filesystem dev-fallback HMAC secret can drift across processes** — + `src/lib/storage/filesystem.ts:328-331`. The dev-only fallback derives + the HMAC secret from `BETTER_AUTH_SECRET`. Two CRM processes running + with different secrets (web vs worker) reject each other's tokens. + Fix: assert `BETTER_AUTH_SECRET` is set when filesystem backend is + active in non-prod, or document the requirement loudly. + +- **Berth PDF apply path: numeric column nulling silently drops** — + `src/lib/services/berth-pdf.service.ts:473-475`. When + `Number.isFinite(n)` is false the apply loop `continue`s without + pushing to `applied` and without warning. Combined with the + "no appliable fields supplied" check (only fires when ALL drop), partial + silent drops are invisible. Fix: collect dropped keys and surface them. + +## Multi-tenant isolation hardening + +- **document_sends row stores `interestId` without verifying port match** — + `src/lib/services/document-sends.service.ts:422`. Audit-log pollution + rather than data exposure (the recipient lookup is port-checked already). + Fix: when `recipient.interestId` is set, fetch with + `and(eq(interests.id, ...), eq(interests.portId, input.portId))` and + throw if missing. + +- **Storage proxy token does not bind to port_id** — + `src/lib/storage/filesystem.ts:73-84`. ProxyTokenPayload is `{k, e, n, +f?, c?}` with a global HMAC. The current "issuer always checks port + first" relies on every issuer being correct in perpetuity. Fix: add a + `p` (portId) claim and have the proxy route resolve key→owner row + + assert `owner.portId === payload.p` before streaming. + +- **Documenso webhook does not enforce port_id on document lookups** — + `src/app/api/webhooks/documenso/route.ts:96-148`. Handlers dispatch by + global `documensoId`. If two ports' documents were ever issued the + same Documenso ID (replay across staging/prod, forwarded webhook from + a foreign instance), the wrong port's interest could be mutated. The + per-body `signatureHash` dedup is partial mitigation. Fix: either + (a) include the originating Documenso instance/team in the lookup, or + (b) verify `documents(documenso_id)` has a unique index port-wide. + +## Recent expense work polish + +- **renderReceiptHeader cursor math drifts after multi-step writes** — + `src/lib/services/expense-pdf.service.ts:854`. After + `doc.text(...)` with auto-flow, `doc.y` advances. Using `doc.y - +headerH + 10` after the rect+stroke block computes against the + post-rect position; works only because pdfkit's text-after-rect + hasn't moved y yet. Headers may misalign on the first receipt page + after a soft page break. Fix: capture `const baseY = doc.y` before + drawing the rect and compute all subsequent offsets relative to it. + +## Settings parsing + +- **`loadRecommenderSettings` rejects string-shaped JSONB booleans** — + `src/lib/services/berth-recommender.service.ts:116`. Postgres returns + JSONB `true/false` as JS booleans, but if an admin saves `"true"` + via a UI that wraps the value as a string, `asBool` returns null and + the per-port override silently falls through to defaults. Not a + security bug; a tuning footgun. Fix: accept `"true"`/`"false"` string + forms in `asBool`. 
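+
+  A minimal sketch of that fix, assuming `asBool` today only accepts real
+  booleans (hypothetical shape; the actual helper lives in
+  `berth-recommender.service.ts`):
+
+  ```ts
+  // Tolerant boolean coercion for JSONB-sourced settings values: accepts
+  // real booleans plus the string forms a settings UI may have saved.
+  function asBool(value: unknown): boolean | null {
+    if (typeof value === 'boolean') return value;
+    if (value === 'true') return true;
+    if (value === 'false') return false;
+    return null; // unparseable → caller falls back to the per-port default
+  }
+  ```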
diff --git a/docs/berth-feature-handoff-prompt.md b/docs/berth-feature-handoff-prompt.md new file mode 100644 index 0000000..240c1bd --- /dev/null +++ b/docs/berth-feature-handoff-prompt.md @@ -0,0 +1,147 @@ +# Handoff prompt for new Claude Code session + +Copy everything below the `---` line into the new chat as your first message. + +--- + +I'm continuing work on a comprehensive multi-feature push that was fully designed in a prior session but not yet implemented. The complete plan lives at `docs/berth-recommender-and-pdf-plan.md` (~1030 lines). **Read that file end-to-end before doing anything else — every design decision, schema change, edge case, and confirmed answer to a product question is captured there.** Don't re-litigate decisions; if something seems unclear, the answer is almost certainly in the plan. + +## What the project is + +A multi-tenant marina/port-management CRM at `/Users/matt/Repos/new-pn-crm`. Next.js 15 App Router, React 19, TypeScript strict, Drizzle ORM on Postgres, MinIO for files, BullMQ on Redis, better-auth, shadcn/ui, Tailwind. See `CLAUDE.md` for the conventions. + +## What we're building (high level) + +The plan bundles 8 capabilities into one branch (`feat/berth-recommender`): + +1. **/clients + /interests list-column fix** (the original bug — list views show `-` everywhere because the service didn't join contacts/yachts) +2. **Full NocoDB Berths import** + seeding + mooring-number normalization (current CRM has `A-01..E-18`; canonical is `A1..E18`) +3. **Schema refactor** to many-to-many `interest_berths` with role flags (`is_primary`, `is_specific_interest`, `is_in_eoi_bundle`) +4. **Berth recommender** (SQL ranking, tier ladder, heat scoring, UI panel) — no AI; pure SQL +5. **EOI bundle** support (multi-berth EOIs + range formatter for the Documenso PDF: `["A1","A2","A3","B5","B6"]` → `"A1-A3, B5-B6"`) +6. **Pluggable storage backend** (s3-compatible OR local filesystem) so admins can run without MinIO if they want +7. **Per-berth PDFs** (versioned uploads, OCR-based reverse parser, conflict-resolution diff dialog) +8. **Sales send-out emails** (berth PDF + brochure) with full audit + size-aware fallback to download links + +## Phase ordering (from plan §2) + +``` +Phase 0: Full NocoDB berth import + mooring normalization + 5 new pricing columns +Phase 1: /clients + /interests list column fix +Phase 2: M:M interest_berths schema refactor + desired dimensions on interests +Phase 3: CRM /api/public/berths endpoint + website cutover +Phase 4: Recommender SQL + tier ladder + heat + UI panel +Phase 5: EOI bundle + range formatter +Phase 6a: Pluggable storage backend + migration CLI + admin UI +Phase 6b: Per-berth PDF storage (versioned) + reverse parser +Phase 7: Sales send-outs + brochure admin + email-from settings +Phase 8: CLAUDE.md updates + final validation +``` + +**Start with Phase 0**. 
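+
+For orientation on capability 5, a sketch of the range compression the plan
+specifies in §4.6 (behavior inferred from the plan's worked examples; the
+real implementation belongs in `src/lib/templates/berth-range.ts`):
+
+```ts
+// Hypothetical sketch of formatBerthRange: collapse same-prefix consecutive
+// mooring numbers into "A1-A3" runs; singles stay bare ("A5").
+export function formatBerthRange(mooringNumbers: string[]): string {
+  const parsed = mooringNumbers
+    .map((m) => /^([A-Z]+)(\d+)$/.exec(m))
+    .filter((m): m is RegExpExecArray => m !== null)
+    .map((m) => ({ prefix: m[1], num: Number(m[2]) }))
+    .sort((a, b) => a.prefix.localeCompare(b.prefix) || a.num - b.num);
+
+  const parts: string[] = [];
+  let i = 0;
+  while (i < parsed.length) {
+    let j = i;
+    // Extend the run while the prefix matches and numbers stay consecutive.
+    while (
+      j + 1 < parsed.length &&
+      parsed[j + 1].prefix === parsed[i].prefix &&
+      parsed[j + 1].num === parsed[j].num + 1
+    ) {
+      j++;
+    }
+    const start = `${parsed[i].prefix}${parsed[i].num}`;
+    parts.push(i === j ? start : `${start}-${parsed[j].prefix}${parsed[j].num}`);
+    i = j + 1;
+  }
+  return parts.join(', ');
+}
+```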
+ +## Working tree state at handoff + +- Branch: `main` (you'll create `feat/berth-recommender` from here) +- Recent commits (already pushed): + - `8699f81 chore(style): codebase em-dash sweep + minor layout polish` + - `d62822c fix(migration): NocoDB import safety + dedup helpers + lead-source backfill` + - `089f4a6 feat(receipts): upload guide page + scanner head-tag fix` + - `77ad10c feat(dashboard): custom date range + KPI port-hydration gate` + - `e598cc0 feat(layout): unified Inbox + UserMenu extraction` + - `f5772ce feat(analytics): Umami integration with per-port admin settings` + - `49d34e0 feat(website-intake): dual-write endpoint + migration chain repair` +- Untracked / uncommitted at handoff: + - `docs/berth-recommender-and-pdf-plan.md` (the plan — read this first) + - `docs/berth-feature-handoff-prompt.md` (this file) + - `berth_pdf_example/` (two reference files — see below) + - `.env.example` (modified — adds `WEBSITE_INTAKE_SECRET=`; pre-commit hook blocks `.env*` files so user adds this manually) +- Dev DB state: + - 245 clients (210 with no `nationality_iso` — Phase 1 backfills from primary phone's `value_country`) + - 4 test rows in `website_submissions` (from a previous live audit; safe to ignore) + - 90 berths with `mooring_number` in `A-01` format (Phase 0 normalizes to `A1`) + - vitest: 956 tests passing + - tsc: clean (one pre-existing issue in `scripts/smoke-test-redirect.ts` that's unrelated) + +## Reference files + +- `berth_pdf_example/Berth_Spec_Sheet_A1.pdf` (358 KB) — sample per-berth PDF. **0 AcroForm fields** (confirmed via pdf-lib) so OCR with positional heuristics is the primary parser tier; the AcroForm tier is built defensively. Plan §9.2 captures the layout structure. +- `berth_pdf_example/Port-Nimara-Brochure-March-2025_5nT92g.pdf` (10.26 MB) — sample brochure. Sized so it ships as an attachment under the 15 MB threshold. Plan §11.1 covers brochure handling. + +## NocoDB access + +You have `mcp__NocoDB_Base_-_Port_Nimara__*` tools available. Tables you'll touch most: + +- `mczgos9hr3oa9qc` — Berths (Phase 0 imports from here; mooring numbers are stored as `A1..E18`) +- `mbs9hjauug4eseo` — Interests (the combined client+deal table the old system used) + +## Branch & commit conventions + +- Create the branch: `git checkout -b feat/berth-recommender` +- Commit messages match recent history style: `<type>(<scope>): <subject>`, lowercase, terse subject, body explains why not what. +- **Pre-commit hook blocks any `.env*` file** including `.env.example`. If you need to update `.env.example`, leave it staged and tell the user to commit manually with `--no-verify` (they're aware of this). +- **Don't push without explicit user permission.** Commits are fine; pushes need approval. +- **Don't run `git rebase`, `git push --force`, or anything destructive without checking.** The branch is solo-owned but the repo's `main` is shared. + +## User communication preferences (from prior session) + +- Direct, no fluff. If something is a bad idea, say so — don't be sycophantic. +- When proposing changes, include trade-offs explicitly. +- For multi-question decisions, use `AskUserQuestion` rather than long bulleted lists. +- Run validation (vitest + tsc) at logical checkpoints. Don't ship a commit with regressions. +- The user prefers small focused commits over mega-commits. Within Phase 0 alone there will probably be 2-3 commits (e.g. mooring normalization, schema additions, NocoDB import script).
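+
+Re the "0 AcroForm fields" note under Reference files: a minimal sketch of
+the pdf-lib check, for reproducibility (standard pdf-lib API; the path is
+the sample file above):
+
+```ts
+import { readFile } from 'node:fs/promises';
+import { PDFDocument } from 'pdf-lib';
+
+// Prints the named AcroForm field count; 0 for the sample sheet, which is
+// why OCR (not AcroForm reads) is the primary parser tier.
+const bytes = await readFile('berth_pdf_example/Berth_Spec_Sheet_A1.pdf');
+const doc = await PDFDocument.load(bytes);
+console.log(doc.getForm().getFields().length);
+```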
+ +## Critical rules (from plan §14) + +Eleven 🔴 critical items requiring tests before their phase ships: + +1. NocoDB mooring collisions → unique constraint + ON CONFLICT +2. Non-PDF disguised upload → magic-byte check +3. Recipient email typos → pre-send confirmation +4. XSS in email body markdown → DOMPurify + payload tests +5. SMTP credentials silently failing → loud error + failed `document_sends` row +6. Wrong-environment `CRM_PUBLIC_URL` → health-check env match +7. Mooring format drift breaking `/berths/A1` URLs → Phase 0 normalization gates Phase 3 +8. Multi-port isolation in recommender → explicit `port_id` filter + cross-port test +9. Permission escalation on SMTP creds → per-port admin only, no rep visibility +10. Filesystem backend in multi-node deployment → refuse to start; documented + health-check enforced +11. Path traversal via storage key in filesystem mode → strict regex validation + path realpath check + +## Pending items (from plan §9) + +These are non-blocking but worth knowing: + +- Sample brochure already provided (the 10.26 MB file above). +- SMTP app password for `sales@portnimara.com` — not yet obtained; expected close to production cutover. Phase 7 ships the admin UI immediately and the credential gets entered when available. +- `CRM_PUBLIC_URL` confirmed as `https://crm.portnimara.com` once live; configurable via env. +- GDPR cascade behavior for `document_sends` (delete vs. anonymize-PII vs. keep) — left `OPEN` in §14.10, default lean: anonymize-PII. Revisit when Phase 7 schema lands. + +## Scope reminder + +- **No prod data depends on the current CRM schema** — refactors don't need backwards-compatibility shims. But every schema change still ships as a Drizzle migration with `pnpm db:generate`. +- **Pluggable storage** rejects Postgres `bytea` as an option (§4.7a). The two backends are s3-compatible (MinIO/AWS/B2/R2/etc.) and local filesystem. Filesystem is single-node only. + +## What to do first + +1. Read `docs/berth-recommender-and-pdf-plan.md` end-to-end. Don't skim. The edge-case audit in §14 alone is critical context. +2. Confirm you've understood the plan by stating back the Phase 0–8 outline and the 11 critical items, then ask the user if they want to proceed with Phase 0. +3. Once approved, create `feat/berth-recommender` and start Phase 0. + +Phase 0 deliverables (per plan): + +- One commit normalizing existing CRM mooring numbers from `A-01` → `A1` form (via `regexp_replace` migration). Delete the offending `scripts/load-berths-to-port-nimara.ts`. +- One commit adding the 5 new pricing columns plus `last_imported_at` (`weekly_rate_high_usd`, `weekly_rate_low_usd`, `daily_rate_high_usd`, `daily_rate_low_usd`, `pricing_valid_until`, `last_imported_at`). Run `pnpm db:generate`. Verify `meta/_journal.json` prevId chain stays contiguous. +- One commit adding `scripts/import-berths-from-nocodb.ts` — the idempotent NocoDB import (handles updates, preserves CRM-side edits via `last_imported_at vs updated_at` check, `pg_advisory_lock`, dry-run flag, etc. per §4.1 and §14.1). +- Update `src/lib/db/seed-data.ts` with the imported berth set so fresh installs get them. +- Final vitest + tsc validation at the end of Phase 0.
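+
+A sketch of the advisory-lock guard from the import-script deliverable (the
+lock key, `db` import path, and the xact-scoped variant are assumptions;
+§4.1 and §14.1 define the actual requirements):
+
+```ts
+import { sql } from 'drizzle-orm';
+import { db } from '@/lib/db';
+
+// Arbitrary but stable key for this script; a concurrent run blocks here
+// until the first run's transaction commits, so upserts can't interleave.
+const BERTH_IMPORT_LOCK_KEY = 7_421_001;
+
+await db.transaction(async (tx) => {
+  await tx.execute(sql`SELECT pg_advisory_xact_lock(${BERTH_IMPORT_LOCK_KEY})`);
+  // ...dry-run reporting / upserts happen here; the lock releases at commit.
+});
+```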
+- Don't add features not in the plan. If something seems missing, ask. +- Don't use AI for the recommender (plan §1 + §13). Pure SQL ranking. + +Once you've read the plan and confirmed understanding, ask me whether to proceed with Phase 0. diff --git a/docs/berth-recommender-and-pdf-plan.md b/docs/berth-recommender-and-pdf-plan.md new file mode 100644 index 0000000..1bcfefd --- /dev/null +++ b/docs/berth-recommender-and-pdf-plan.md @@ -0,0 +1,1086 @@ +# Berth recommender, data import, PDF management, and sales-send-out emails — comprehensive plan + +**Last updated:** 2026-05-05 (edge-case audit appended as §14) +**Owner:** Matt + Claude (this session) +**Branch:** `feat/berth-recommender` (to be created) + +This document is the single source of truth for the multi-feature push that bundles: + +1. /clients + /interests list-column fixes (the original bug report) +2. Full NocoDB Berths table import + seeding (CRM becomes the source of truth) +3. Schema refactor — many-to-many interests↔berths with role flags + desired-dimension columns +4. Berth recommender (SQL ranking, tier ladder, heat scoring, UI panel) +5. EOI bundle support — multi-berth EOIs + Documenso range-string formatter +6. Per-berth PDF management — versioned storage + reverse parser +7. Sales send-out emails — berth PDF + brochure flows +8. Public-website cutover — public site reads berths from CRM, NocoDB read path retired + +It exists primarily so that if the conversation context is compacted, the next session can pick up without losing any of the design decisions captured here. + +--- + +## 1. Confirmed design decisions + +| Decision area | Choice | +| -------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Multi-berth model | Drop `interest.berthId`; everything M:M via `interest_berths` junction; one row marked `is_primary=true`. Cleaner long-term despite the larger refactor. | +| EOI bypass | Implicit-by-default + per-berth override flag. EOI signed on any berth in an interest covers all `is_in_eoi_bundle=true` berths in that interest unless one has `eoi_bypass_reason` set. | +| Multi-berth EOI generation | Documenso template gets a single merge token (e.g. `eoi_berth_range`) populated by a render function that compresses the `is_in_eoi_bundle=true` berth set into a compact string like `"A1-A10, B2-B5"`. The compact string is **only** used inside the Documenso PDF (space-constrained); CRM UI always shows the berths as individual chips. | +| Public-map "under interest" rule | A berth shows as under interest publicly when it has at least one `is_specific_interest=true` link **OR** any open interest with a paid deposit. EOI-bundle-only berths stay "available" on the public map. | +| Recommender trigger | Always-on panel on the interest detail page. Auto-populates recommendations when the interest has desired dimensions but no specific berth(s). Available on every interest, not gated by sub-status. | +| Tier ladder | Per-port admin setting controls the policy; default ladder: A=no interests, B=lost-only history, C=early-stage open interests, D=hidden when in late stage (deposit/contract/signed). Fall-throughs return berths to a higher tier per the fall-through policy. 
| Fall-through policy | Per-port admin setting: immediate-with-heat-flag (default) / cooldown / never-auto-recommend. Configurable cooldown days. | | Heat signals | All four signals factor in: (1) fall-through recency, (2) furthest stage reached, (3) historical interest count across all clients, (4) historical EOI signature count. Per-port admin can tune weights. | | Top N recommendations | Top 5–8 with a "show all feasible" expander. | | Oversize cap | Max 30% larger than desired dimensions, configurable per port. | | Add-recommendation UI | Dialog at click time: "Pitching specifically" vs "Just exploring". Linked berths in the interest detail get a persistent toggle to switch between sub-statuses, with a clear visual indicator of the public-map consequence ("This berth will appear as under interest on the public map" / "This berth is hidden from the public map"). | | Amenity matching | Sales rep adds amenity filters per-interest in the recommender panel after speaking with the client. Amenities are not part of the public form. | | Explainability | Cards collapsed by default; expand-per-card to see tier, size buffer, amenity match reasoning. Progressive disclosure. | | Berth read path | Cut over: website calls a new `/api/public/berths` endpoint on the CRM. NocoDB read path retired. | | Per-berth PDF storage | Full versioning. Every upload creates a new version. Current = latest. Roll back to any prior version. | | PDF↔DB conflict resolution | On upload: parse the PDF; auto-fill any CRM fields that are currently null; for fields where both have values that disagree, show a diff dialog requiring rep confirmation per field. Audit-log every accepted change. | | PDF parsing approach | 3-tier fallback: (1) AcroForm field reads if the PDF has named form fields, (2) OCR with positional heuristics for flat text PDFs, (3) AI parse as last-resort fallback when OCR confidence is low. AI is opt-in / suggested only — not the primary path. | | PDF generation direction | External uploads only. CRM does not auto-regenerate per-berth PDFs from a template. | | Brochure model | Multiple labeled brochures per port + a "default" marker for fast-send. Versioned similarly to per-berth PDFs. | | Send-button locations | All three: interest detail (per-berth send + bulk + brochure), berth detail (send to any client via recipient picker), client detail (multi-berth bulk + brochure). | | Send tracking | Full audit: every send creates a timeline entry on both client and interest, including recipient, who sent, timestamp, exact PDF version, custom body text used. | | Email layout/wording editability | Layout/styling shell is locked (background blur gradient + logo + white box). Body Markdown inside the white box is admin-editable with merge fields (`{{client_name}}`, `{{berth_mooring}}`, `{{port_name}}`, etc.). Reps get a per-send override input above the body for one-off customization. | | From-address | Configurable per-port: `sales_from_address` and `noreply_from_address` system_settings, with `sales_smtp_*` credentials encrypted at rest. Defaults: `sales@portnimara.com` for sales-initiated sends, `noreply@portnimara.com` for transactional automation. Future swap to OAuth-per-rep is a config change, not a code change. | + +--- + +## 2.
Phase & commit plan (one feature branch, multiple commits) + +**Branch:** `feat/berth-recommender` + +``` +feat: full NocoDB berth import + seed [Phase 0] +fix(clients): list contacts + addresses join + col redesign [Phase 1] +fix(interests): list yacht + desired dims + col redesign [Phase 1] +feat(db): m:m interest_berths junction + role flags [Phase 2] +feat(db): desired dimensions on interests + backfill [Phase 2] +feat(berths): public berths API endpoint [Phase 3] +fix(website): swap getBerths to call CRM endpoint [Phase 3 - website repo] +feat(recommender): SQL ranking + tier ladder + heat [Phase 4] +feat(recommender): UI panel + add-to-interest dialog [Phase 4] +feat(eoi): multi-berth EOI generation + range formatter [Phase 5] +feat(storage): pluggable S3-or-filesystem backend [Phase 6a] +feat(storage): migration CLI + admin UI wrapper [Phase 6a] +feat(berths): per-berth PDF storage (versioned) [Phase 6b] +feat(berths): PDF reverse parser (AcroForm/OCR/AI) [Phase 6b] +feat(emails): send-berth-PDF flow + send-brochure flow [Phase 7] +feat(admin): brochures management UI + send-from settings [Phase 7] +chore: update CLAUDE.md with new conventions [Phase 8] +``` + +Each phase is independently testable. Phases 1, 2, 3 don't depend on the recommender so they're safe early wins. Phase 4 depends on Phases 0+2. + +--- + +## 3. Schema changes + +### 3.1 New tables + +```ts +// src/lib/db/schema/interests.ts + +export const interestBerths = pgTable( + 'interest_berths', + { + id: text('id') + .primaryKey() + .$defaultFn(() => crypto.randomUUID()), + interestId: text('interest_id') + .notNull() + .references(() => interests.id, { onDelete: 'cascade' }), + berthId: text('berth_id') + .notNull() + .references(() => berths.id, { onDelete: 'restrict' }), + /** One row per interest is the primary; used in templates / forms / "the berth for this deal" semantics. */ + isPrimary: boolean('is_primary').notNull().default(false), + /** True = berth shows as "under interest" on the public map. False = legal/EOI-only. */ + isSpecificInterest: boolean('is_specific_interest').notNull().default(true), + /** True = covered by the EOI bundle for this interest. */ + isInEoiBundle: boolean('is_in_eoi_bundle').notNull().default(false), + /** Set when EOI is explicitly waived for this berth even though the interest's primary EOI is signed. */ + eoiBypassReason: text('eoi_bypass_reason'), + eoiBypassedBy: text('eoi_bypassed_by'), // user id + eoiBypassedAt: timestamp('eoi_bypassed_at', { withTimezone: true }), + addedBy: text('added_by'), // user id + addedAt: timestamp('added_at', { withTimezone: true }).notNull().defaultNow(), + notes: text('notes'), + }, + (t) => [ + uniqueIndex('idx_ib_interest_berth').on(t.interestId, t.berthId), + // Only one primary per interest + uniqueIndex('idx_ib_one_primary') + .on(t.interestId) + .where(sql`${t.isPrimary} = true`), + index('idx_ib_berth').on(t.berthId), + index('idx_ib_specific') + .on(t.berthId) + .where(sql`${t.isSpecificInterest} = true`), + ], +); +``` + +### 3.2 Column additions + +```sql +-- Interests gain desired-dimensions for the recommender +ALTER TABLE interests + ADD COLUMN desired_length_ft numeric, + ADD COLUMN desired_width_ft numeric, + ADD COLUMN desired_draft_ft numeric; + +-- Berths gain pricing fields revealed by the sample PDF, plus a pointer to the current PDF version +-- (full version history lives in berth_pdf_versions). 
+ALTER TABLE berths + ADD COLUMN weekly_rate_high_usd numeric, + ADD COLUMN weekly_rate_low_usd numeric, + ADD COLUMN daily_rate_high_usd numeric, + ADD COLUMN daily_rate_low_usd numeric, + ADD COLUMN pricing_valid_until date, + ADD COLUMN last_imported_at timestamp with time zone, -- set by NocoDB import script + ADD COLUMN current_pdf_version_id text REFERENCES berth_pdf_versions(id); +``` + +Notes: + +- The 4 rate columns + `pricing_valid_until` come from the per-berth PDF, not NocoDB. After the Phase 0 NocoDB import these stay null until reps upload PDFs in Phase 6. +- The `last_imported_at` column lets the NocoDB import script implement "do not overwrite if user has manually edited since import" — compare against `updated_at`. +- The pricing-validity date powers a "Pricing data may be stale" warning chip on the berth detail page when `pricing_valid_until < today()`. + +### 3.3 More new tables + +```ts +// Per-berth PDF version history +export const berthPdfVersions = pgTable('berth_pdf_versions', { + id: text('id') + .primaryKey() + .$defaultFn(() => crypto.randomUUID()), + berthId: text('berth_id') + .notNull() + .references(() => berths.id, { onDelete: 'cascade' }), + versionNumber: integer('version_number').notNull(), // 1, 2, 3... + s3Key: text('s3_key').notNull(), + fileName: text('file_name').notNull(), + fileSizeBytes: integer('file_size_bytes').notNull(), + contentSha256: text('content_sha256').notNull(), + uploadedBy: text('uploaded_by').notNull(), // user id + uploadedAt: timestamp('uploaded_at', { withTimezone: true }).notNull().defaultNow(), + /** Diffs accepted on this upload, captured for audit */ + parseResults: jsonb('parse_results'), // { engine: 'acroform'|'ocr'|'ai', extracted: {...}, conflicts: [...], appliedFields: [...] } +}); + +// Port-wide brochures (multiple per port + a default marker) +export const brochures = pgTable('brochures', { + id: text('id') + .primaryKey() + .$defaultFn(() => crypto.randomUUID()), + portId: text('port_id') + .notNull() + .references(() => ports.id, { onDelete: 'cascade' }), + label: text('label').notNull(), // 'General', 'Investor Pack', etc. 
+ description: text('description'), + isDefault: boolean('is_default').notNull().default(false), + archivedAt: timestamp('archived_at', { withTimezone: true }), + createdBy: text('created_by').notNull(), + createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(), +}); + +export const brochureVersions = pgTable('brochure_versions', { + id: text('id') + .primaryKey() + .$defaultFn(() => crypto.randomUUID()), + brochureId: text('brochure_id') + .notNull() + .references(() => brochures.id, { onDelete: 'cascade' }), + versionNumber: integer('version_number').notNull(), + s3Key: text('s3_key').notNull(), + fileName: text('file_name').notNull(), + fileSizeBytes: integer('file_size_bytes').notNull(), + contentSha256: text('content_sha256').notNull(), + uploadedBy: text('uploaded_by').notNull(), + uploadedAt: timestamp('uploaded_at', { withTimezone: true }).notNull().defaultNow(), +}); + +// Send-out audit log for berth PDFs and brochures +export const documentSends = pgTable( + 'document_sends', + { + id: text('id') + .primaryKey() + .$defaultFn(() => crypto.randomUUID()), + portId: text('port_id') + .notNull() + .references(() => ports.id), + /** Either client_id or interest_id is set (or both) */ + clientId: text('client_id').references(() => clients.id), + interestId: text('interest_id').references(() => interests.id), + recipientEmail: text('recipient_email').notNull(), + documentKind: text('document_kind').notNull(), // 'berth_pdf' | 'brochure' + berthId: text('berth_id').references(() => berths.id), // when documentKind='berth_pdf' + berthPdfVersionId: text('berth_pdf_version_id').references(() => berthPdfVersions.id), + brochureId: text('brochure_id').references(() => brochures.id), // when documentKind='brochure' + brochureVersionId: text('brochure_version_id').references(() => brochureVersions.id), + bodyMarkdown: text('body_markdown'), // exact body used (after merge-field expansion) + sentByUserId: text('sent_by_user_id').notNull(), + fromAddress: text('from_address').notNull(), // resolved sales@ or per-rep + sentAt: timestamp('sent_at', { withTimezone: true }).notNull().defaultNow(), + /** SMTP provider message-id for deliverability tracking */ + messageId: text('message_id'), + /** When the initial send had its attachment dropped because the SMTP server + * rejected the size (552 etc.) and the system retried with a download + * link, this captures the rejection reason for ops visibility. Null when + * the original send went through as-is. */ + fallbackToLinkReason: text('fallback_to_link_reason'), + }, + (t) => [ + index('idx_ds_client').on(t.clientId, t.sentAt), + index('idx_ds_interest').on(t.interestId, t.sentAt), + index('idx_ds_berth').on(t.berthId, t.sentAt), + ], +); +``` + +### 3.4 Removals / renames + +```sql +-- After migrating existing interest.berthId values into interest_berths, drop the column. +-- This is an irreversible schema change so it goes in a separate migration AFTER all callers are updated. +ALTER TABLE interests DROP COLUMN berth_id; +``` + +--- + +## 4. Service layer + +### 4.1 NocoDB berth import (Phase 0) + +`scripts/import-berths-from-nocodb.ts`: + +- Fetches all rows from NocoDB Berths (table id `mczgos9hr3oa9qc`). +- Maps every column to the corresponding new-CRM `berths` column. The new schema already has every NocoDB column. +- Upserts by `mooring_number` + `port_id`. 
Existing rows get updated (preserving CRM-side overrides via a "do not overwrite if user has manually edited since import" guard, tracked via a `last_imported_at` column we add to berths in this same phase). +- Reproducible: re-running the script picks up NocoDB additions/edits without clobbering CRM-side changes. +- Also feeds `src/lib/db/seed-data.ts` so fresh installs get the data. + +### 4.2 listClients fix (Phase 1) + +`src/lib/services/clients.service.ts` — add joins: + +- `clientContacts` (primary contact only, ordered by `is_primary desc, created_at desc`). +- `clientAddresses` (primary address only). +- One backfill SQL that sets `clients.nationality_iso = (subquery: primary phone's value_country)` where `nationality_iso IS NULL`. + +New `ClientRow` shape includes `primaryEmail`, `primaryPhone`, `countryIso`, `latestInterest { stage, mooringNumber }`. + +### 4.3 listInterests fix (Phase 1) + +`src/lib/services/interests.service.ts` — add joins: + +- `yachts` for the linked yacht name. +- New columns `desired_length_ft / width_ft / draft_ft` rendered as a compact "60×18×6 ft" string in a `Berth size desired` column. + +### 4.4 Recommender (Phase 4) + +`src/lib/services/berth-recommender.service.ts`: + +```ts +type RecommendBerthsArgs = { + interestId: string; + portId: string; + // Optional rep-supplied filters + amenityFilters?: { + minPowerCapacityKw?: number; + requiredVoltage?: number; + requiredAccess?: string; + requiredMooringType?: string; + requiredCleatCapacity?: string; + }; +}; + +type Recommendation = { + berthId: string; + mooringNumber: string; + tier: 'A' | 'B' | 'C' | 'D'; + fitScore: number; // 0-100 + sizeBufferPct: number; // how much larger than desired + heatScore: number; // 0-100, only relevant for fall-throughs + reasons: { + dimensional: string; + pipeline: string; + amenities?: string; + heat?: string; + }; + // Display data + lengthFt: number; + widthFt: number; + draftFt: number; + status: string; + amenities: { + /* power, voltage, access, mooring_type, etc. */ + }; +}; +``` + +Algorithm (single SQL CTE chain, ~50 lines): + +1. **Feasible set** — berths where `length_ft >= desired_length`, `width_ft >= desired_width`, `draft_ft >= desired_draft`, and not exceeding the per-port max-oversize-pct cap. +2. **Apply amenity filters** — hard-filter by required amenities. +3. **Tier classification** — left join `interest_berths` aggregates: count of active interests, max stage reached, count of EOI signatures, latest fall-through date. Apply the per-port tier ladder. +4. **Heat score** — for berths that have fall-through history, compute a heat score using the per-port heat weights. +5. **Fit score** — combination of `1 / size_buffer_pct` (closer fit better), tier rank, heat (higher = boosted in late tiers). +6. **Sort + top-N** — sort by tier asc, fit score desc; return top 8 or all-feasible per panel mode. + +### 4.5 Public berths API (Phase 3) + +`src/app/api/public/berths/route.ts`: + +- GET endpoint, no auth (public-facing). +- 5-minute response cache (matching the existing website's behavior). +- Returns the same shape NocoDB returned to the website (so the website's getBerths() swap is a one-line change to the URL + auth header). +- Filters out berths archived in CRM. +- Status mapping: `under_offer` ↔ at-least-one `is_specific_interest=true` link OR paid deposit; `sold` ↔ `status='sold'`; else `available`. 
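+
+A sketch of that status mapping as one SQL expression (column names beyond
+this plan, e.g. `interests.deposit_paid_at` for the paid-deposit signal and
+`interests.outcome IS NULL` for "open", are assumptions):
+
+```ts
+import { sql } from 'drizzle-orm';
+
+// Hypothetical CASE used by the public feed's SELECT.
+const publicStatus = sql<string>`
+  CASE
+    WHEN berths.status = 'sold' THEN 'Sold'
+    WHEN EXISTS (
+      SELECT 1
+      FROM interest_berths ib
+      JOIN interests i ON i.id = ib.interest_id
+      WHERE ib.berth_id = berths.id
+        AND i.outcome IS NULL -- open interests only
+        AND (ib.is_specific_interest = true OR i.deposit_paid_at IS NOT NULL)
+    ) THEN 'Under Offer'
+    ELSE 'Available'
+  END
+`;
+```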
Website-side change (separate repo `Port Nimara/Website`), env-configurable: + +```ts +// server/utils/berths.ts +const CRM_PUBLIC_URL = process.env.CRM_PUBLIC_URL; +// Production: https://crm.portnimara.com +// Staging: https://crm-staging.portnimara.com +// Dev: http://localhost:3000 + +if (!CRM_PUBLIC_URL) throw new Error('CRM_PUBLIC_URL must be set'); + +export const getBerths = () => + $fetch<{ list: Berth[] }>(`${CRM_PUBLIC_URL}/api/public/berths`); +``` + +`.env.example` on the website repo gains a `CRM_PUBLIC_URL=` line. The previous NocoDB read path is deleted in the same commit. Once the website is in production reading from the CRM, the NocoDB Berths table can be retired. + +### 4.6 EOI bundle range formatter (Phase 5) + +`src/lib/templates/berth-range.ts`: + +```ts +/** + * Compresses a list of mooring numbers like ['A1', 'A2', 'A3', 'B2', 'B3'] + * into 'A1-A3, B2-B3'. Single-berth runs become bare ('A5', not 'A5-A5'). + * Used only by the Documenso EOI template merge — UI elsewhere shows + * berths as individual chips. + */ +export function formatBerthRange(mooringNumbers: string[]): string; +``` + +Unit-tested against: + +- `[]` → `""` +- `['A5']` → `"A5"` +- `['A1', 'A2', 'A3']` → `"A1-A3"` +- `['A1', 'A3']` → `"A1, A3"` +- `['A1', 'A2', 'B5', 'B6', 'B7']` → `"A1-A2, B5-B7"` +- Mixed letter-prefixes, non-numeric suffixes, etc. + +Documenso payload merge: `eoi_berth_range` token populated via this formatter, included in `src/lib/templates/merge-fields.ts` allow-list. + +### 4.7a Pluggable storage backend (Phase 6a) + +The CRM stores files (per-berth PDFs, brochures, GDPR exports, etc.) through a single abstraction so the deployment can choose between S3-compatible (MinIO/AWS S3/Backblaze B2/Cloudflare R2/Wasabi/Tigris) and local filesystem at runtime. + +**Why not "MinIO off → into Postgres"** (the originally-asked option): a 20MB brochure stored as Postgres `bytea` bloats the WAL, balloons backups, can't be streamed by `postgres-js`, and can't be served via CDN. The pluggable filesystem option achieves the "no MinIO required" goal without any of those penalties. + +**Storage interface** (`src/lib/storage/index.ts`): + +```ts +export interface StorageBackend { + /** Upload a stream/buffer to the backend. Returns the storage key. */ + put( + key: string, + body: Buffer | NodeJS.ReadableStream, + opts: PutOpts, + ): Promise<{ key: string; sizeBytes: number; sha256: string }>; + /** Stream a file out. Throws NotFoundError if missing. */ + get(key: string): Promise<NodeJS.ReadableStream>; + /** HEAD-equivalent: existence + size check without reading body. */ + head(key: string): Promise<{ sizeBytes: number; contentType: string } | null>; + /** Delete. Idempotent. */ + delete(key: string): Promise<void>; + /** Generate a short-lived URL for the browser to upload to (S3 only; filesystem backend returns a CRM-internal URL that proxies the upload). */ + presignUpload(key: string, opts: PresignOpts): Promise<{ url: string; method: 'PUT' | 'POST' }>; + /** Generate a short-lived URL for downloads (S3 = signed URL; filesystem = CRM-internal proxy URL with HMAC token). */ + presignDownload(key: string, opts: PresignOpts): Promise<{ url: string; expiresAt: Date }>; + /** Backend-specific identifier for telemetry/admin display.
*/ + readonly name: 's3' | 'filesystem'; +} +``` + +**Two implementations:** + +| Backend | Use case | Notes | +| ------------------- | ---------------------------------------------------------- | ----- | +| `S3Backend` | MinIO, AWS S3, B2, R2, Wasabi, Tigris (any S3-compatible) | Reuses existing `src/lib/minio/index.ts` code. Configured via `system_settings`: `storage_s3_endpoint`, `_region`, `_bucket`, `_access_key`, `_secret_key_encrypted`, `_force_path_style`. | +| `FilesystemBackend` | Single-VPS deployments, dev | Stores files at `${STORAGE_FILESYSTEM_ROOT}/<key>`. Default root: `./storage` (gitignored). Presigned URLs are CRM-internal `/api/storage/[token]` routes that verify HMAC + serve the file inline. | + +**Factory function** (`src/lib/storage/index.ts`): + +```ts +export async function getStorageBackend(): Promise<StorageBackend> { + const setting = await getSystemSetting('storage_backend'); // 's3' | 'filesystem' + if (setting === 'filesystem') return new FilesystemBackend(/*...config*/); + return new S3Backend(/*...config*/); // default +} +``` + +The factory result is cached per process; settings changes invalidate it via the existing `system_settings` cache invalidation path (Redis pub/sub). + +**Migration command** (`scripts/migrate-storage.ts`): + +```bash +# Dry run — reports which files would be moved + total bytes, doesn't transfer +pnpm tsx scripts/migrate-storage.ts --from s3 --to filesystem --dry-run + +# Actual migration +pnpm tsx scripts/migrate-storage.ts --from s3 --to filesystem +pnpm tsx scripts/migrate-storage.ts --from filesystem --to s3 +``` + +The script: + +1. Locks via `pg_advisory_lock(STORAGE_MIGRATION_LOCK_KEY)` so two runs can't conflict. +2. Walks every file-referencing table (`berth_pdf_versions.s3_key`, `brochure_versions.s3_key`, anywhere else MinIO is used today). +3. For each row: streams the file from source backend → uploads to target backend with the same key. Verifies sha256 matches afterward. +4. Once all files are confirmed in the target, updates `system_settings.storage_backend = '<target>'` atomically. +5. Old backend is left intact for 24h so a rollback is trivial; daily orphan-cleanup worker eventually removes after confirmation. +6. Resumable — re-running picks up where it left off via per-row `migrated_at` markers in a temp table. + +**Admin UI** (`/[portSlug]/admin/storage`, super_admin only): + +- Current backend display + capacity stats (file count, total bytes, oldest file) +- "Switch backend" button → confirmation modal with dry-run output (count + bytes to transfer + estimated time) +- Migration runs as a background job; UI polls progress (Socket.IO event) +- Post-migration banner: "Switched from S3 to Filesystem. Old S3 bucket retained for 24h; click here to clean up early." +- The button is a thin wrapper around the CLI script — same code path, different trigger. + +**Schema changes:** + +```ts +// All existing file-reference columns are already named *_s3_key but that's +// a misnomer once filesystem mode exists.
Rename in this phase: +// berth_pdf_versions.s3_key -> berth_pdf_versions.storage_key +// brochure_versions.s3_key -> brochure_versions.storage_key +// (no production data, so a one-shot ALTER TABLE rename is safe per §11.2) +``` + +### 4.7b PDF management (Phase 6b) + +`src/lib/services/berth-pdf.service.ts`: + +- `uploadBerthPdf(berthId, file, parseResult)` — stores file via the active `StorageBackend` at key `berths/{berthId}/v{n}/{filename}`, increments version counter, sets `berths.current_pdf_version_id`. +- `parseBerthPdf(buffer)` — 3-tier: + 1. Try AcroForm read via `pdf-lib`. If named fields exist (`length_ft`, `mooring_number`, etc.), return extracted values + `engine: 'acroform'`. + 2. Else fall back to OCR via Tesseract.js (already in the codebase). Use positional heuristics keyed off label patterns ("Length: 82ft", "Mooring: A12"). Return `engine: 'ocr'`, with confidence per field. + 3. If OCR confidence is below a threshold for too many fields, surface an "AI parse" button to the rep that calls the OpenAI extraction path (already optionally configured via `OPENAI_API_KEY`). Return `engine: 'ai'`. +- `reconcilePdfWithBerth(berthId, parsedFields)` — for each parsed field: if CRM is null, auto-set; if CRM and PDF disagree on a non-null value, return as a `conflict` for the diff dialog. + +### 4.8 Sales send-outs (Phase 7) + +`src/lib/services/document-sends.service.ts`: + +- `sendBerthPdf({ berthId, recipient: { clientId|email, interestId? }, customBodyMarkdown? })` — resolves the rep's body template (per-port editable default) + per-send override, runs merge expansion, attaches the berth's current PDF version, sends from `sales_from_address`, logs to `document_sends`. +- `sendBrochure({ brochureId?, recipient, customBodyMarkdown? })` — similar, defaults to the port's default brochure when `brochureId` omitted. +- Email send goes through the existing `email/index.ts` infrastructure with a new `sender_account` parameter that picks between `noreply` and `sales` SMTP credentials. + +Rate-limited: each user can send at most N brochures + M berth PDFs per hour to prevent accidental mass-blasts. + +### 4.9 Per-port settings (Phase 7) + +New keys in `system_settings`: + +| Key | Type | Default | Notes | +| ------------------------------------ | ---- | ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------ | +| `recommender_max_oversize_pct` | int | 30 | | +| `recommender_top_n_default` | int | 8 | | +| `fallthrough_policy` | text | `'immediate_with_heat'` | One of `immediate_with_heat`, `cooldown`, `never_auto_recommend` | +| `fallthrough_cooldown_days` | int | 30 | Used when policy=cooldown | +| `heat_weight_recency` | int | 30 | 0-100 | +| `heat_weight_furthest_stage` | int | 40 | | +| `heat_weight_interest_count` | int | 15 | | +| `heat_weight_eoi_count` | int | 15 | | +| `tier_ladder_hide_late_stage` | bool | true | If false, late-stage berths show as Tier D dimmed | +| `sales_from_address` | text | `sales@portnimara.com` | | +| `sales_smtp_host` | text | — | Provider-agnostic: e.g. `smtp.gmail.com` (Workspace) or `smtp.office365.com` (M365). | +| `sales_smtp_port` | int | 587 | 465 for SSL, 587 for STARTTLS. | +| `sales_smtp_secure` | bool | false | true=SSL on 465, false=STARTTLS on 587. | +| `sales_smtp_user` | text | — | Usually the same as `sales_from_address`. | +| `sales_smtp_pass_encrypted` | text | — | App password from the provider, encrypted at rest. 
| +| `sales_imap_host` | text | — | Required for bounce monitoring. e.g. `imap.gmail.com` or `outlook.office365.com`. | +| `sales_imap_port` | int | 993 | Standard IMAPS for both providers. | +| `sales_imap_user` | text | — | Usually same as SMTP user; can be a dedicated bounce-handler mailbox. | +| `sales_imap_pass_encrypted` | text | — | App password, encrypted at rest. | +| `sales_auth_method` | text | `app_password` | Future: `oauth_google` / `oauth_microsoft` when OAuth migration happens. Out of scope for this branch. | +| `noreply_from_address` | text | `noreply@portnimara.com` | Existing | +| `email_template_send_berth_pdf_body` | text | (default markdown) | Admin-editable per-port | +| `email_template_send_brochure_body` | text | (default markdown) | Admin-editable per-port | +| `brochure_max_upload_mb` | int | 50 | Per-port cap on brochure file size | +| `berth_pdf_max_upload_mb` | int | 15 | Per-port cap on per-berth PDF file size | +| `email_attach_threshold_mb` | int | 15 | Files larger than this go as a download link instead of attachment. Auto-fallback to link still applies on SMTP size-rejection regardless. | + +**Global settings** (not per-port — applies system-wide, super_admin only): + +| Key | Type | Default | Notes | +| ------------------------------------- | ---- | ----------- | ------------------------------------------------------------------------------------------------- | +| `storage_backend` | text | `'s3'` | One of `'s3'` (MinIO/AWS/B2/R2/etc.) or `'filesystem'`. Drives the `getStorageBackend()` factory. | +| `storage_s3_endpoint` | text | — | e.g. `https://s3.amazonaws.com`, `http://minio:9000`, `https://s3.us-west-002.backblazeb2.com` | +| `storage_s3_region` | text | `us-east-1` | | +| `storage_s3_bucket` | text | — | | +| `storage_s3_access_key` | text | — | Encrypted at rest. | +| `storage_s3_secret_key_encrypted` | text | — | Encrypted at rest. | +| `storage_s3_force_path_style` | bool | true | Required for MinIO; false for AWS S3. | +| `storage_filesystem_root` | text | `./storage` | Path on disk; relative paths resolve from process cwd. | +| `storage_proxy_hmac_secret_encrypted` | text | — | Encrypted. Used to sign filesystem proxy download URLs. | + +--- + +## 5. UI components + +### 5.1 /clients column redesign (Phase 1) + +New columns: **Name · Email · Phone · Country · Source · Latest stage · Created**. Drop unused Yachts/Companies/Tags from the default view; they remain available via the column-picker in saved views. + +### 5.2 /interests column redesign (Phase 1) + +New columns: **Client · Yacht · Berth (primary mooring) · Berth size desired · Stage · EOI status · Source · Last activity**. Drop the Category column. + +### 5.3 Recommender panel (Phase 4) + +`src/components/interests/berth-recommender-panel.tsx`: + +- Always-visible card on the interest detail page. +- Header: "Recommendations for {interest desired dims}" + refresh button + "Add filters" toggle (opens amenity filters input). +- Body: top-8 recommendations as cards, each with mooring number, dimensions, status pill, tier label, heat indicator chip if relevant. +- Cards collapsed by default; click to expand reasoning (size buffer %, pipeline tier, amenity matches, heat score breakdown). +- Each card has an "Add to interest" primary button (opens the role-picker dialog) + "View berth" secondary link. +- "Show all feasible" expander at the bottom. 
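+
+For concreteness, a sketch of how the §4.9 heat weights might combine into
+the recommender's 0-100 `heatScore` (signal normalization to 0..1 is assumed
+to happen upstream in the recommender SQL):
+
+```ts
+// Hypothetical combination of the four heat signals. Weights come from the
+// heat_weight_* settings above (defaults 30 / 40 / 15 / 15).
+type HeatWeights = {
+  recency: number;
+  furthestStage: number;
+  interestCount: number;
+  eoiCount: number;
+};
+
+function heatScore(signals: Record<keyof HeatWeights, number>, w: HeatWeights): number {
+  const total = w.recency + w.furthestStage + w.interestCount + w.eoiCount;
+  if (total === 0) return 0;
+  const weighted =
+    signals.recency * w.recency +
+    signals.furthestStage * w.furthestStage +
+    signals.interestCount * w.interestCount +
+    signals.eoiCount * w.eoiCount;
+  return Math.round((weighted / total) * 100);
+}
+```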
+ +### 5.4 Add-to-interest dialog (Phase 4) + +`src/components/interests/add-berth-to-interest-dialog.tsx`: + +- Two large radio cards: "Pitching specifically" (sets `is_specific_interest=true`) vs "Just exploring" (sets `is_specific_interest=false`). +- Below each: clear consequence text — "This berth will appear as under interest on the public map" / "This berth is hidden from the public map". +- Submit button: "Add berth to interest". + +### 5.5 Linked-berths list (Phase 4) + +In the interest detail's "Berths" tab: + +- Each linked berth row has a toggle/icon-button group showing current sub-status with the same consequence text. +- "Set as primary" action when not primary. +- "Mark in EOI bundle" toggle. +- "Bypass EOI for this berth" with reason textarea (only visible if interest has a signed primary EOI). + +### 5.6 Berth detail page (Phase 6, 7) + +- New "Documents" tab with the per-berth PDF section: current version preview, version history list, "Replace PDF" upload button. +- Upload triggers the parse → reconcile diff flow. +- "Send to client" button (Phase 7) opens a recipient picker + body composer. + +### 5.7 Client detail page (Phase 7) + +- New "Send documents" action in the client header: opens a multi-step flow where rep picks (a) which berth PDFs (or none), (b) which brochure (or none), (c) custom body. + +### 5.8 Admin: brochures management (Phase 7) + +- New `/[portSlug]/admin/brochures` page: list of brochures, upload new, mark default, archive, version history. +- Per-port admin scope. + +### 5.9 Admin: send-from settings (Phase 7) + +- Section in `/[portSlug]/admin/email`: configure `sales_from_address` + SMTP credentials, `noreply_from_address`. Test-send button. Body-template editors for `email_template_send_berth_pdf_body` and `email_template_send_brochure_body`. + +### 5.10 Admin: storage backend (Phase 6a) + +- New `/admin/storage` page (super_admin only — not per-port): + - **Current backend** card: shows whether `s3` or `filesystem` is active; for s3 shows endpoint/bucket; for filesystem shows the resolved absolute root path. + - **Capacity stats**: file count, total bytes, oldest file timestamp. + - **Switch backend** button: opens confirmation modal with dry-run output (count + bytes to transfer + estimated time). Migration runs as a background job; UI polls progress via Socket.IO. + - **Test connection** button (s3 mode only): attempts a list/put/get/delete on a sentinel object; surfaces errors immediately. + - **Backup hint** sidebar: contextual notes — s3 mode says "Configure your provider's lifecycle/replication"; filesystem mode says "Include `${root}` in your backup tooling." +- Wraps `pnpm tsx scripts/migrate-storage.ts`; no UI logic that the CLI doesn't also support. + +--- + +## 6. Data migration / backfill + +| Backfill step | Source | Target | Notes | +| --------------------------------------------------- | ----------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- | ------------------------------------------ | +| Berth data | NocoDB `mczgos9hr3oa9qc` | `berths` table | Phase 0; idempotent script. | +| `interest_berths` | `interests.berth_id` (existing nullable col) | `interest_berths` rows with `is_primary=true, is_specific_interest=true, is_in_eoi_bundle=(eoi_status='signed')` | Phase 2; one-shot SQL. 
| +| `clients.nationality_iso` | Primary phone's `client_contacts.value_country` | `clients.nationality_iso` | Phase 1; ~210 rows. | +| `interests.desired_length_ft / width_ft / draft_ft` | NocoDB `Length`, `Width`, `Depth` text fields parsed numerically | New numeric columns | Phase 2; via `migration-transform.ts`. | +| Berth PDFs | Existing PDFs from the old NocoDB (none stored — these come from external designers via fresh upload) | `berth_pdf_versions` | Manual; reps upload fresh PDFs over time. | +| Brochures | Existing brochure files | `brochures` + `brochure_versions` | Manual; admin uploads in the new admin UI. | + +--- + +## 7. Cross-repo work + +### 7.1 Mooring-number canonical format — `"A1"`, not `"A-01"` + +**Discovered drift (2026-05-05):** the CRM currently stores mooring numbers as `"A-01" .. "E-18"` (hyphen + zero-padded). NocoDB, the public website, the per-berth PDFs, and every external reference use `"A1" .. "E18"` (no hyphen, no padding). The old `scripts/load-berths-to-port-nimara.ts` introduced the wrong format when it seeded the CRM from a hand-rolled snapshot. + +**Canonical rule going forward:** + +- Letter prefix immediately followed by digits, no hyphen, no leading zeros: `A1`, `A11`, `B5`, `E18`. +- The CRM stores this exact form. The website displays this exact form. The Documenso EOI templates render this exact form. Search/lookup is exact-match on this form. +- Multi-letter prefixes are theoretically possible (e.g. `AA1` if the port ever expands) — the validation regex accepts `^[A-Z]+\d+$`. + +**One-shot backfill SQL** (Phase 0, runs before the NocoDB import re-imports data): + +```sql +UPDATE berths +SET mooring_number = regexp_replace(mooring_number, '^([A-Z]+)-0*(\d+)$', '\1\2') +WHERE mooring_number ~ '^[A-Z]+-0*\d+$'; +``` + +**Code to update or delete:** + +- `scripts/load-berths-to-port-nimara.ts` — DELETE in Phase 0 (superseded by the NocoDB import script). +- Any tests / fixtures / hard-coded references in the codebase using `'A-01'` style — grep + fix during Phase 0. +- The new NocoDB import script preserves `"Mooring Number"` verbatim — no transformation. + +### 7.2 Website-side cutover + +The public website currently reads three things from NocoDB Berths and writes one. After cutover: + +| Website file | Current behavior | After cutover | +| ---------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `server/utils/berths.ts` → `getBerths()` | Lists all berths from NocoDB table `mczgos9hr3oa9qc` | Calls `${CRM_PUBLIC_URL}/api/public/berths` | +| `server/utils/berths.ts` → `getBerthByMooringNumber(num)` | Fetches one berth by `Mooring Number` from NocoDB | Calls `${CRM_PUBLIC_URL}/api/public/berths/${num}` | +| `server/utils/berths.ts` → `linkWebsiteInterestToBerth(berthId, interestId)` | Writes a NocoDB M:M link between berth and website-interest row | Removed — interest↔berth association already happens via the CRM website-intake dual-write (`submission_id` + `port_slug` carry the linkage; the CRM creates the proper `interest_berths` row) | +| `server/utils/email.ts` line 92 | Comment says `// e.g. "Berth A-12"` | Update comment to `// e.g. 
"Berth A12"` (cosmetic correctness) | +| `server/api/berths.ts` | `defineEventHandler(async () => (await getBerths()).list)` — public berth list endpoint serving the public berth-map | No change to handler; it just calls the updated `getBerths()` | +| `server/api/berth.ts` | `defineEventHandler((event) => getBerthByMooringNumber(getQuery(event).number as string))` — single berth lookup | No change to handler | +| `server/api/register.ts` | Calls `getBerthByMooringNumber` + `linkWebsiteInterestToBerth` after NocoDB write | Drop the `linkWebsiteInterestToBerth` call (CRM handles linkage); keep `getBerthByMooringNumber` as a sanity-check (returns 200 with warning when berth not found) | + +**Vue components that consume the Berth shape (no change needed if shape is preserved):** + +- `components/pn/specific/website/berths/map/item.vue` — reads `berth["Mooring Number"]` for the map label +- `components/pn/specific/website/berths/filter/results.vue` — table cell + nuxt-link to `/berths/${mooring}` +- `components/pn/specific/website/berths-item/introduction.vue` — page title for `/berths/[number]` +- `components/pn/specific/website/berths-item/form.vue` — passes the mooring number through to `register.ts` as the `berth` field +- `components/pn/specific/website/supplement-eoi/form.vue` — maps to mooring numbers for the supplemental form + +### 7.3 CRM public endpoint — shape contract + +The CRM `/api/public/berths` and `/api/public/berths/[mooringNumber]` endpoints **must** return the same JSON shape NocoDB returned, so the website's `Berth` type (`utils/data.ts`) and all Vue templates work without modification. Field names use the NocoDB-style capital-letter quoted keys: + +```ts +interface Berth { + Id: number; // CRM uses string UUIDs internally; expose a stable numeric id derived from import order, OR migrate the website's Berth.Id to string + 'Mooring Number': string; + Length: number; // ft + Draft: number; // ft (boat draught requirement) + 'Side Pontoon': string; + 'Power Capacity': number; // kW + Voltage: number; // V + Status: 'Available' | 'Under Offer' | 'Sold'; // mapped from CRM internal status + Width: number; // ft + Area: string; + 'Mooring Type': string; + 'Bow Facing': string; + 'Cleat Type': string; + 'Cleat Capacity': string; + 'Bollard Type': string; + 'Bollard Capacity': string; + 'Nominal Boat Size': number; + Access: string; + 'Map Data'?: { path: string; x: string; y: string; transform: string; fontSize: string }; +} +``` + +**Two open implementation choices** for the CRM endpoint to resolve when building Phase 3: + +1. **`Id` field**: NocoDB used numeric ids (1, 2, 3...). CRM uses UUIDs. The CRM endpoint could either (a) expose the `id` UUID as a string and update the website's `Berth.Id` to `string`, or (b) maintain a stable `legacy_nocodb_id` on each berth from the import and surface that as the numeric `Id` for backwards compatibility. **Recommendation: (a)** — string UUIDs align with the CRM's internal model and the website only uses `Id` for React-keys / URL building, both of which work fine with strings. +2. **`Status` mapping**: CRM internal status enum is `available / under_offer / sold` (snake_case). The endpoint translates to NocoDB's display form (`Available / Under Offer / Sold`). + +### 7.4 Website env additions + +`Port Nimara/Website/.env.example` gains: + +``` +# CRM endpoint for live berth data. The website fetches berths from this URL +# instead of reading directly from NocoDB. 
Production: https://crm.portnimara.com
+CRM_PUBLIC_URL=
+```
+
+### 7.5 Sequencing
+
+1. **Phase 0**: Mooring-number normalization SQL runs in CRM. Old `load-berths-to-port-nimara.ts` deleted. NocoDB import script imports remaining data.
+2. **Phase 3**: CRM `/api/public/berths` + `/api/public/berths/[mooringNumber]` ship and are verified live (with the right shape) before any website change.
+3. **Phase 3 (website repo, separate PR)**: `getBerths()` + `getBerthByMooringNumber()` swap to call CRM. `linkWebsiteInterestToBerth()` deleted along with its call site in `register.ts`. Comment fix in `email.ts`. `CRM_PUBLIC_URL` env var added.
+4. **Post-cutover**: NocoDB Berths table can be retired (separate cleanup commit, after running parallel for a couple weeks to confirm).
+
+No website push happens until the CRM endpoint is live and the response shape is verified field-by-field against the live NocoDB output.
+
+---
+
+## 8. Testing strategy
+
+| Area | Test type | Coverage |
+| --- | --- | --- |
+| `formatBerthRange` | Unit (vitest) | All edge cases listed in §4.6 |
+| Recommender ranking | Unit | Synthetic berth fixtures across all 4 tiers + amenity filters + oversize cap |
+| `reconcilePdfWithBerth` | Unit | Empty-CRM auto-fill, conflict-detected, all-match no-op |
+| PDF parser | Integration | AcroForm sample, OCR sample (using a sample PDF the user will provide), AI fallback mocked |
+| Recommender API | Integration | E2E from interest detail to add-to-interest |
+| Public `/api/public/berths` | Integration | Unauth call, response shape matches old NocoDB shape, status mapping |
+| Send-out flows | Integration | Email lands with correct attachment + correct from-address; audit row written |
+| Multi-berth EOI | Integration | Documenso payload includes correct `eoi_berth_range` |
+| Public-map filter | Integration | EOI-bundle-only berth shows as available; specific-interest berth shows as under-interest |
+
+---
+
+## 9. Pending items / open questions — updated 2026-05-05
+
+1. **Sample PDF** — Reviewed (`berth_pdf_example/Berth_Spec_Sheet_A1.pdf`). 0 AcroForm fields confirmed via `pdf-lib`. **OCR with positional heuristics is the primary parser path.** AcroForm tier is built defensively but won't match this PDF family. AI fallback only when OCR confidence dips below threshold.
+2. **PDF format observations** (drives parser layout-aware extraction):
+ - Header line: `BERTH NUMBER` then `<mooring number> <length>` (e.g. "A1" + "200'").
+ - Right column, top: dimensional fields in `Label: <imperial> / <metric>` format — Length, Width, Water Depth.
+ - Right column, mid: extra fields in `Label: <value>` form — Bow Facing, Pontoon, Power Capacity (kW), Voltage at 60 Hz, Max. draught of vessel.
+ - Mid-page: `PURCHASE PRICE:` block (Fee Simple OR Strata Lot tenure); `WEEK HIGH / LOW`; `DAY HIGH / LOW`; pricing-validity date sentence ("CONFIRMED THROUGH UNTIL <date>").
+ - Specifications block: Mooring Type, Cleat Type, Cleat Capacity, Bollard Type, Bollard Capacity, Access — all per-berth.
+ - Amenities + Services blocks: appear constant across all berths (port-wide). **Not parsed per-berth; modelled as port-level configuration.**
+3. **Schema gaps revealed by the PDF** — five berth columns to add in Phase 0:
+ - `weekly_rate_high_usd numeric` / `weekly_rate_low_usd numeric`
+ - `daily_rate_high_usd numeric` / `daily_rate_low_usd numeric`
+ - `pricing_valid_until date` (the "ALL PRICES ABOVE ARE CONFIRMED THROUGH UNTIL <date>" line — used to flag stale pricing on the berth detail page)
+4. **Sample brochure** — Not yet shared. Lower priority; brochure UI doesn't depend on parsing the brochure file.
+5. **SMTP app password for `sales@portnimara.com`** — Will be obtained close to production cutover. Not blocking Phases 0–6. Phase 7 ships the email-account admin UI immediately; the credential gets entered when available.
+6. **`CRM_PUBLIC_URL`** — Confirmed: `https://crm.portnimara.com` once live. **Configurable via env var `CRM_PUBLIC_URL` on the website repo, not hard-coded.** Phase 3 endpoint must work when accessed via that hostname.
+7. **Tenure/lease decoration** — `tenure_type / tenure_years / tenure_start_date / tenure_end_date` already exist on `berths`. The PDF's "Fee Simple OR Strata Lot" maps to a per-berth `tenure_type` value. Add `'fee_simple'` and `'strata_lot'` to the allowed enum.
+8. **AI parse OPENAI_API_KEY** — Already optional in env. The AI fallback path checks `if (!env.OPENAI_API_KEY) return null` and surfaces a clear error in the rep UI if AI is requested but unavailable.
+
+---
+
+## 10. Out of scope (for this branch)
+
+- OAuth-per-rep email (deferred; configurable design lets us add it later as another credential type).
+- CRM-side regeneration of berth PDFs from a designed template (PDFs are external uploads only).
+- Public-form amenity capture (rep-driven filters only for now).
+- Decommissioning the NocoDB Berths table (happens after the cutover is verified; separate cleanup commit).
+- /clients and /interests view-saving improvements beyond the column redesign.
+- Postgres `bytea` storage as a third backend option (rejected per §4.7a — pluggable s3/filesystem covers the "no MinIO required" use case without WAL/backup bloat).
+- GDPR cascade behavior for `document_sends` (left as `OPEN` in §14.10; default lean: anonymize-PII keep metadata; revisit when Phase 7 schema lands).
+- Multi-node filesystem-backend support (filesystem requires single-node; multi-node deployments must use s3-compatible).
+
+---
+
+## 11. Risks and mitigations
+
+| Risk | Mitigation |
+| --- | --- |
+| Schema refactor (drop `interest.berthId`) breaks callers | **No prod data yet** (confirmed 2026-05-05) — refactor lands in one commit (drop column + add junction + update callers + migrate the small dev dataset). Every schema change ships as a Drizzle migration; `db:generate` is required for every schema-touching commit so the migration chain stays consistent for the eventual prod deployment. |
+| OCR misreads silently corrupt berth data | Diff dialog mandatory for non-null conflicts; auto-fill only when CRM is null. Audit log enables rollback. |
+| Mass-send via the new flows | Per-user hourly send rate limit; dry-run preview mode.
| +| Documenso template change breaks the EOI bundle render | `eoi_berth_range` token added to `VALID_MERGE_TOKENS` allow-list; tests assert formatter output for golden inputs. | +| Website cutover breaks the public berth map | CRM endpoint cached (5-min TTL stale-while-revalidate); off-hours cutover; rollback is reverting one website file. NocoDB stays read-only during the parallel-run window. | +| **Brochure PDFs are 20MB+** | See §11.1 below. | + +--- + +### 11.1 Large brochure PDFs (20MB+) — handling + +Brochures are port-wide marketing PDFs and run **20MB+** (per user, 2026-05-05). This is well above what default Next.js / browser upload paths handle gracefully. Implementation requirements: + +**Upload path:** + +- Use **direct-to-MinIO presigned PUTs** for brochure (and per-berth PDF) uploads, not server-proxied multipart. The CRM hands the browser a presigned URL; the browser uploads directly; only metadata (s3_key, size, sha256) is POSTed to the CRM. Avoids hitting Next.js's body-size limit and frees the Node.js server from holding 20MB+ in memory. +- The existing MinIO infrastructure (`src/lib/minio/index.ts`) already supports presigned URL generation; reuse it. +- Server-side validation: signed URL constrains content-type to `application/pdf` and max size to **50 MB** for brochures, **15 MB** for per-berth PDFs (per-port admin can adjust both via system_settings). +- After the browser-side upload completes, CRM verifies via HEAD request that the object exists at the expected size + content-type before writing the `brochure_versions` / `berth_pdf_versions` row. Orphan blobs from failed uploads are cleaned up by a daily worker job. + +**Download / send-out path:** + +- Outbound emails attach the PDF by streaming from MinIO into nodemailer rather than buffering. The existing email service can stream attachments — confirmed via `email/index.ts` review. +- **Default attachment threshold: 15 MB raw**. Files at or below this go as an attachment; files above go as a short-lived (24 h) signed-URL download link in the email body instead. Per-port admin can override. + - Rationale (encoded size = raw × ~1.37 due to base64): 15 MB raw = ~20.5 MB encoded, which fits Gmail (25 MB), Yahoo (25 MB), AOL (25 MB), Office 365 default (20 MB) and iCloud (20 MB). May still bounce on stricter corporate gateways (~10 MB), but those are recovered via the manual-confirmed fallback below. + - Sample brochure today (`Port-Nimara-Brochure-March-2025`) is 10.26 MB raw → ~14 MB encoded → ships comfortably as attachment everywhere normal. +- **Pre-send size check is the only automatic path** — the threshold decision is made _before_ the send reaches the SMTP relay. One delivery, one outcome. Eliminates the only realistic duplicate-send scenario (forwarding chains where the company gateway accepts the attachment and Gmail rejects the forwarded copy). +- **Async-bounce handling — manual confirmation, not auto-retry**: when an SMTP size-rejection arrives asynchronously (relay returned 250, recipient gateway rejected later — common with corporate gateways behind forwarding rules), the system: + - Polls `sales@`'s inbox via IMAP (`sales_imap_host` / `_port` / `_user` / `_pass_encrypted`) for incoming bounce messages. The CRM already has `imapflow` + `mailparser` as dependencies (used elsewhere for client-conversation IMAP), so this is incremental work, not a new library. 
+ - Parses each DSN, identifies size-related codes (`552 "Message size exceeds maximum"`, `552-5.3.4 "Message too big for system"`, `5.2.3 "Message size exceeds fixed maximum"`, etc.) via a bounce-classifier.
+ - Matches the bounce back to a `document_sends` row by message-id (the `messageId` column already exists on the schema).
+ - Marks the row with `bounce_reason` + `bounce_classified_as = 'size'`.
+ - Surfaces a banner on the document timeline entry: "Delivery rejected — recipient gateway said the file was too large. [Resend as link]".
+ - The rep clicks once to retry as a link. Idempotent (retry uses the existing `document_sends.id` with a `retried_at` flag — re-clicking does nothing if a retry already succeeded).
+ - **Never auto-retries an async bounce.** The forwarding-chain edge case (company.com accepts the attachment + Gmail rejects the forward) is unavoidable from the sender side; we just refuse to silently make it worse by sending the link unprompted.
+
+**Bounce monitoring requires IMAP credentials, not just SMTP** — the CRM cannot read bounces with send-only access. Both Gmail/Workspace and M365 deliver DSNs to the sending account's inbox; both expose IMAP at `imap.gmail.com:993` and `outlook.office365.com:993` respectively, both authenticated with the same app password as their SMTP. If the user provides only SMTP credentials, the bounce-monitor is disabled gracefully and the size-rejection banner won't appear (the send-out flow still works for everything ≤ threshold).
+
+**Provider-agnostic settings** — see §4.9 for the full setting list. The CRM doesn't bake in "Gmail" or "M365" assumptions; the admin UI in `/[portSlug]/admin/email` exposes a simple "Use Gmail preset" / "Use Microsoft 365 preset" button that fills in the right host/port/secure values, but the underlying storage is generic and a future swap from one to the other is a config change, not a redeploy.
+
+The MinIO bucket has a per-object lifecycle rule deleting expired download tokens after 30 days.
+
+**Storage cost:**
+
+- Versioned brochures: 5–10 brochures per port × 20–50 MB each, with every version retained, ≈ a few hundred MB per port over the project lifetime. Cheap on MinIO.
+
+**Schema changes for size limits / URL strategy:**
+
+```sql
+ALTER TABLE brochure_versions ADD COLUMN download_url_expires_at timestamp with time zone;
+ALTER TABLE berth_pdf_versions ADD COLUMN download_url_expires_at timestamp with time zone;
+```
+
+(used to throttle regeneration of the signed URLs — only re-sign when the cached one is within 1 hour of expiry; a sketch follows the settings table below).
+
+**system_settings additions:**
+
+| Key | Default | Purpose |
+| --- | --- | --- |
+| `brochure_max_upload_mb` | 50 | Per-port cap on brochure file size |
+| `berth_pdf_max_upload_mb` | 15 | Per-port cap on per-berth PDF file size |
+| `email_attach_threshold_mb` | 15 | Files larger than this go as a download link instead of an attachment. Auto-fallback to link on SMTP size-rejection regardless of this setting. |
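+
+A minimal sketch of the re-sign throttle referenced above. It assumes the storage backend exposes a signed-download helper (`getSignedDownloadUrl` is an illustrative name, not the real interface) and that the cached URL is persisted alongside `download_url_expires_at`:
+
+```ts
+// Sketch only: reuse a cached signed URL until it has <1h of validity left,
+// then mint a fresh 24h one (the short-lived download link from §11.1).
+// The import path assumes the project's `@` alias for `src/`.
+import { getStorageBackend } from '@/lib/storage';
+
+const RESIGN_WINDOW_MS = 60 * 60 * 1000; // re-sign when <1h of validity remains
+const URL_TTL_SECONDS = 24 * 60 * 60; // 24h signed-URL lifetime
+
+export async function getDownloadUrl(version: {
+  storageKey: string;
+  cachedUrl: string | null; // assumed to be stored next to the expiry column
+  downloadUrlExpiresAt: Date | null;
+}): Promise<string> {
+  const expiresAt = version.downloadUrlExpiresAt?.getTime() ?? 0;
+  if (version.cachedUrl && expiresAt - Date.now() > RESIGN_WINDOW_MS) {
+    return version.cachedUrl; // still comfortably valid, no re-sign needed
+  }
+  // Persisting the new URL + expiry back to the *_versions row is elided here.
+  return getStorageBackend().getSignedDownloadUrl(version.storageKey, URL_TTL_SECONDS);
+}
+```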
+
+### 11.2 Schema/migration discipline (since we're working without prod data)
+
+User confirmed (2026-05-05) that there's no production data depending on the current CRM schema, so refactors don't need backwards-compatibility shims. **However**, every schema change still ships as a proper Drizzle migration:
+
+- Run `pnpm db:generate` after every schema edit.
+- Verify the generated SQL matches the snapshot diff — no missing columns, no orphaned constraints.
+- Verify the prevId chain in `meta/_journal.json` is contiguous (the audit pattern from the recent migration-chain repair).
+- Migrations are run on dev via `pnpm db:push` (or applied directly via psql for one-off backfills like the mooring-number normalization).
+- Every backfill SQL is idempotent (safe to re-run) and committed alongside the migration that introduces the columns.
+- The migration that drops `interest.berth_id` runs **after** the migration that creates `interest_berths` and the data migration that copies values across.
+
+The "we can break things in dev" license is for **shape changes**, not for sloppy migration files. The chain has to be clean for the eventual prod cutover, where re-running migrations from scratch must produce the exact same end state.
+
+## 12. Definition of done
+
+- [ ] All vitest tests pass (currently 956 — target 1000+ after this work).
+- [ ] All Playwright smoke + exhaustive projects pass.
+- [ ] tsc clean.
+- [ ] ESLint clean.
+- [ ] CRM `/api/public/berths` returns the full berth data, the website is cut over to it, and the website's NocoDB-read code path is removed.
+- [ ] /clients list shows email + phone + country for every record that has the data.
+- [ ] /interests list shows yacht + desired dims + EOI status.
+- [ ] Recommender panel renders on every interest detail with desired dimensions; produces 5–8 ranked options.
+- [ ] Berth detail page accepts PDF uploads, shows version history, performs reconcile-diff dialog on conflicts.
+- [ ] Sales rep can send a berth PDF or brochure to a client; audit row appears in the activity timeline.
+- [ ] Per-port admin can configure recommender weights, fall-through policy, brochures, and email-from settings.
+- [ ] Super_admin can switch the storage backend between s3 and filesystem; migration CLI works in both directions.
+- [ ] All berth mooring numbers in CRM match the canonical `^[A-Z]+\d+$` format (no hyphens, no leading zeros).
+- [ ] Sample berth PDF (`berth_pdf_example/Berth_Spec_Sheet_A1.pdf`) parses end-to-end via OCR with positional heuristics; extracted fields match the actual berth data.
+- [ ] Sample brochure (`berth_pdf_example/Port-Nimara-Brochure-March-2025_5nT92g.pdf`, 10.26 MB) sends as an attachment to a Gmail recipient without falling back to a download link.
+- [ ] CLAUDE.md updated with new conventions.
+
+---
+
+## 13. CLAUDE.md additions to land in Phase 8
+
+- Multi-berth interest model: `interest_berths` is the source of truth; `interest.berthId` no longer exists. `is_primary` semantics; `is_specific_interest` semantics; `is_in_eoi_bundle` semantics.
+- Berth PDFs: versioned via the active storage backend; `berths.current_pdf_version_id` always points to the latest active version.
+- Brochures: per-port; default brochure marked via `is_default` flag; archived brochures retain version history.
+- Send-from accounts: configurable via `system_settings`; defaults to `sales@portnimara.com` for human-touch and `noreply@portnimara.com` for automation.
+- Recommender: pure SQL ranking — no AI. Per-port admin tunes weights via `system_settings`. Heat scoring applies only to fall-through berths.
+- EOI bundle: render via `formatBerthRange()` for the Documenso `eoi_berth_range` merge field; CRM UI always shows individual berth chips.
+- Public berths API: `/api/public/berths` is the single source of truth for the public website; output shape is stable and versioned.
+- Mooring number canonical format: `^[A-Z]+\d+$` (e.g. `A1`, `B12`, `E18`) — no hyphen, no leading zeros. Stored, displayed, URL-encoded, and rendered in EOIs in this exact form. +- Storage backend: code never imports MinIO/S3 directly. All file I/O goes through `getStorageBackend()` from `src/lib/storage`. Switching backends is a `system_settings` change + `pnpm tsx scripts/migrate-storage.ts` run. +- Filesystem storage backend is **single-node only**. Multi-node deployments must use the s3-compatible backend. +- Email send-out: `document_sends` is the audit table (separate from `audit_logs` because volume + binary refs). Bounce monitoring requires IMAP credentials in addition to SMTP — without them, the size-rejection banner stays disabled. + +--- + +## 14. Edge-case audit (approved 2026-05-05) + +Severity legend: 🔴 critical (data loss/corruption, wrong-recipient sends), 🟠 high (breaks workflow), 🟡 medium (unexpected behavior, needs handling), 🟢 low (cosmetic / future-proofing). + +### 14.1 Phase 0 — NocoDB berth import + mooring normalization + +| Case | Sev | Mitigation | +| ------------------------------------------------------------------------------------------------------------------------- | --- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Mooring number collisions across NocoDB rows | 🔴 | Unique constraint on `(port_id, mooring_number)` already in CRM schema. Import uses `ON CONFLICT (port_id, mooring_number) DO UPDATE` so collisions show as updates, not duplicates. | +| User edits berth in CRM, then NocoDB also gets edited — whose change wins? | 🟠 | Compare `berths.updated_at` against `last_imported_at`. If `updated_at > last_imported_at`, the CRM row was edited by a human after the last import — preserve it (skip overwrite). Per-row dry-run report shows what was kept vs overwritten. Run with `--force` flag to override. | +| NocoDB row deleted between imports | 🟠 | Don't auto-delete CRM-side. Mark as `import_status='orphaned'` and surface in admin UI for manual resolution. | +| Numeric fields with units appended ("63ft") or units mismatched | 🟠 | Run all numeric extractions through `parseDecimalWithUnit()` that strips trailing units, normalizes to ft. NocoDB's metric formula columns ignored — recomputed from imperial. | +| `Length` decimal(2) in NocoDB vs unbounded `numeric` in CRM — re-imports trigger spurious "changed" detection on rounding | 🟡 | Normalize CRM-side to 2 decimals on import. Diff detection compares rounded values. | +| Concurrent import (two reps trigger simultaneously) | 🟡 | `pg_advisory_lock(BERTH_IMPORT_LOCK_KEY)` for the duration. Second invocation waits or errors immediately depending on flag. | +| NocoDB API pagination (defensive) | 🟢 | Loop with `pageSize=100` until `next` is null. | +| `Map Data` JSON has missing/unexpected keys | 🟢 | Validate against zod schema; on mismatch, log + skip the map_data update for that row, don't fail the whole import. | +| Status mismatch (NocoDB has 3, CRM has more nuanced flags) | 🟡 | Map NocoDB `Available`→`available`, `Under Offer`→`under_offer`, `Sold`→`sold`. CRM-side `status_override_mode` preserved if set. | +| Mooring normalization regex edge cases (multi-letter prefix, all-digit) | 🟡 | Regex `^([A-Z]+)-?0*(\d+)$` handles single-letter, multi-letter, hyphenated, leading-zero. 
Pure-numeric: explicit check + log if encountered. | +| Backfill SQL runs while NocoDB import also running | 🟡 | Sequential within the same migration script: normalize first, import second. | + +### 14.2 Phase 1 — /clients + /interests column fix + +| Case | Sev | Mitigation | +| ---------------------------------------------------------------------------- | --- | ------------------------------------------------------------------------------------------------------------------------------------------------- | +| Multiple `is_primary=true` contacts on one client | 🟠 | Add unique partial index `WHERE is_primary=true` on `client_contacts(client_id)`. Backfill: keep most recently-updated as primary, demote others. | +| Phone has `value_country='SX'` but client is actually a US visitor | 🟡 | Phone country is a _proxy_. Display column as "Country" not "Nationality". | +| Backfill race: backfill SQL runs while user edits `nationality_iso` manually | 🟢 | `WHERE nationality_iso IS NULL` clause never overwrites manually-set values. Re-running safe. | +| Client has no phone (only email or nothing) | 🟢 | Country column "-". `nationality_iso` stays null. | +| Phone parses but `value_country` is null | 🟢 | Country column "-". | +| Yacht-by-clientId join: client owns multiple yachts | 🟡 | List view shows `latestYacht.name`, picked by `currentOwnerSince` desc. Detail view shows full list. | +| Desired dimensions on `interests` partially filled (length only) | 🟡 | Render as `"60ft × ? × ?"`. Recommender treats null dimensions as "no constraint" for that axis. | +| Residential interests have a different schema | 🟢 | Already separate routes; don't mix into the marina `/interests` list. | + +### 14.3 Phase 2 — M:M schema refactor (`interest_berths`) + +| Case | Sev | Mitigation | +| ---------------------------------------------------------------------------------------- | --- | ---------------------------------------------------------------------------------------------------------------------------- | +| Existing `interest.berth_id` points to deleted berth | 🔴 | Pre-flight check before migration: orphan rows logged + halted. User decides: skip or restore. | +| Migrating from NocoDB's `Interested Parties` M:M: same interest+berth pair appears twice | 🟡 | Dedup on `(interestId, berthId)` before INSERT. | +| Schema enforces "one primary per interest" but data migration writes more than one | 🟠 | Migration script asserts only one primary per interest before INSERT. If multiple candidates, pick by EOI status > recency. | +| Berth deleted while linked in `interest_berths` (`onDelete: 'restrict'`) | 🟠 | UI surfaces "Cannot archive — used in N active interests" with click-through to those interests. | +| Interest archived while linked to berths — junction rows persist | 🟢 | `onDelete: 'cascade'` from interest. Archive (`archived_at`) is soft; recommender filters out archived interests by default. | +| Two reps simultaneously edit `is_specific_interest` on the same row | 🟡 | Last-write-wins acceptable; surface socket event for realtime UI sync. | +| Toggle to `is_specific_interest=false` after EOI signed | 🟡 | Allowed (rep might want to mark as "not actively pitched anymore" even after EOI). Doesn't auto-clear `is_in_eoi_bundle`. | +| User tries to add a berth to interest that's already linked | 🟢 | UNIQUE on `(interest_id, berth_id)` → upsert: open the role-toggle dialog instead of inserting duplicate. 
|
+
+### 14.4 Phase 4 — Recommender
+
+| Case | Sev | Mitigation |
+| --- | --- | --- |
+| Interest has no desired dimensions | 🟠 | Panel shows "Set desired dimensions to see recommendations" with inline form to enter them. |
+| Only some dimensions specified (length only) | 🟡 | Recommender treats unspecified as "no constraint." Banner: "Tighter recommendations available when you add Width and Draft." |
+| All feasible berths exceed oversize cap | 🟡 | Empty result with helper text: "No berths within 30% buffer. Show all feasible? [button]" — clicking expands cap to ∞ for that view. |
+| Desired width > every berth's width | 🟡 | Empty + "No berths can fit a yacht of this width." |
+| Berth status changes during the recommender session | 🟡 | Refresh button + 60-sec auto-refresh. Cache key includes berth status. |
+| Recommender SQL on a port with 1000+ berths | 🟡 | Single CTE chain with proper indexes. Benchmark target: <100ms p95 on 5000 berths + 500 interests. |
+| Berth has interest from same client as the current one | 🟢 | Highlight: "Already linked to this client's interest from <date>". |
+| Heat score from interests that closed _won_ (not fall-through) | 🟠 | Heat only fires when `outcome` is `lost_*` or `cancelled`. Won outcomes mean berth is sold (Tier hidden). |
+| Amenity filter where no berth satisfies | 🟢 | Empty + "Adjust filters" prompt. |
+| Tier ladder hides D, but rep wants to see all | 🟢 | "Show all stages" toggle in panel header. Doesn't change per-port default. |
+| Interest is residential type, panel still tries to render | 🟡 | Panel only mounts on `/interests`, not `/residential/interests`. Explicit guard. |
+
+### 14.5 Phase 5 — EOI bundle + range formatter
+
+| Case | Sev | Mitigation |
+| --- | --- | --- |
+| Empty bundle | 🟢 | Returns `""`. EOI generation refuses to send if range is empty. |
+| Single berth | 🟢 | `["A1"]` → `"A1"`, not `"A1-A1"`. Tested. |
+| Skip numbers (A1, A2, A4) | 🟢 | `"A1-A2, A4"`. Tested. |
+| Mixed letters in unsorted input | 🟢 | Sort by `(letter, number)` first. Tested. |
+| Duplicate moorings in input | 🟢 | Dedup via `Set` before formatting. |
+| Mooring with non-numeric suffix (e.g. `A1a`) | 🟡 | Regex `^([A-Z]+)(\d+)$` — non-conforming inputs pass through unchanged, joined by `, `. Logged warning. |
+| Pure-numeric mooring (no letter prefix) | 🟡 | Same fallthrough. Edge case, documented. |
+| Very long ranges (50+ consecutive) | 🟢 | "A1-A50" string fits. If exceeds Documenso PDF field max length, truncate with "+N more" and log. |
+| Bundle generated for one rep while another modifies it | 🟡 | EOI generation reads the current bundle at trigger time; modifications after that go into the next EOI. Audit log captures snapshot. |
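+
+Several of the rows above pin down `formatBerthRange()`'s exact contract, so here is an illustrative re-implementation of that behavior (the real one lives in `src/lib/templates/berth-range.ts`; truncation to "+N more" for over-long Documenso fields is elided):
+
+```ts
+// Illustrative only: collapses canonical moorings into compact ranges,
+// matching the documented edge cases (dedup, sort, pass-through, no "A1-A1").
+const MOORING = /^([A-Z]+)(\d+)$/;
+
+export function formatBerthRange(moorings: string[]): string {
+  const unique = [...new Set(moorings)]; // dedup before formatting
+  const parsed: { prefix: string; num: number }[] = [];
+  const passthrough: string[] = [];
+
+  for (const m of unique) {
+    const match = MOORING.exec(m);
+    if (match) parsed.push({ prefix: match[1], num: Number(match[2]) });
+    else passthrough.push(m); // non-conforming inputs pass through unchanged
+  }
+
+  // Sort by (letter, number) so unsorted mixed-letter input collapses correctly.
+  parsed.sort((a, b) => a.prefix.localeCompare(b.prefix) || a.num - b.num);
+
+  const parts: string[] = [];
+  let i = 0;
+  while (i < parsed.length) {
+    let j = i;
+    // Extend the run while the prefix matches and numbers stay consecutive.
+    while (
+      j + 1 < parsed.length &&
+      parsed[j + 1].prefix === parsed[i].prefix &&
+      parsed[j + 1].num === parsed[j].num + 1
+    ) {
+      j++;
+    }
+    const start = `${parsed[i].prefix}${parsed[i].num}`;
+    const end = `${parsed[j].prefix}${parsed[j].num}`;
+    parts.push(i === j ? start : `${start}-${end}`); // single berth stays "A1"
+    i = j + 1;
+  }
+
+  return [...parts, ...passthrough].join(', ');
+}
+
+// formatBerthRange(['A4', 'A2', 'A1', 'B5', 'B6', 'B7']) → "A1-A2, A4, B5-B7"
+```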
+
+### 14.6 Phase 6 — Per-berth PDF (storage + parser)
+
+| Case | Sev | Mitigation |
+| --- | --- | --- |
+| Upload of 0-byte file | 🟠 | Server-side validation: reject `Content-Length: 0`. Pre-check before generating presigned URL. |
+| Non-PDF disguised as PDF (wrong magic bytes) | 🔴 | Read first 5 bytes of upload, verify `%PDF-`. On mismatch, delete the MinIO object and reject. |
+| Upload exceeds size limit | 🟠 | Presigned URL has `content-length-range` constraint. Failed upload: orphan cleaned by daily worker. |
+| Two reps upload simultaneously to the same berth | 🟡 | Storage key uses `gen_random_uuid()` (no version-number conflict at storage layer); DB row decides which one is `current`. Wrap row insert in serializable transaction. |
+| Parse fails entirely (corrupt PDF, OCR errors out) | 🟠 | Show as "Saved, parsing failed" with "Retry parse" button. PDF stored regardless; parse failure doesn't block save. |
+| Parser extracts duplicate fields | 🟡 | Take first match by Y-coordinate (top of page), warn on duplicate. |
+| Parser extracts numbers that are unit-ambiguous | 🟢 | Sample PDF pattern is `<imperial> / <metric>` — parse imperial; verify imperial × 0.3048 ≈ metric (within 1% tolerance). Mismatch → flag for review. |
+| OCR confuses 0/O, 1/I/l | 🟡 | Numeric-only fields validated as numbers. Mooring number compared against the berth being uploaded to. |
+| Mooring number in PDF doesn't match the berth being uploaded to | 🟠 | Warning dialog: "This PDF's berth number is `B5` but you're uploading to berth `A1`. Continue?" Force re-confirm. |
+| Pricing valid-until date in the past at upload time | 🟢 | Accept; UI shows "Pricing data may be stale" chip. Don't block. |
+| Imperial vs metric values disagree in PDF (designer typo) | 🟡 | Tolerance check (±1%); above that, log + use imperial as source of truth + warn. |
+| Rollback to a version that was uploaded then archived | 🟢 | Versions kept indefinitely (no hard delete). Archive flag is "hide from history view"; rollback can target it. |
+| Berth hard-deleted while versions exist | 🟡 | `onDelete: 'cascade'` from berth → versions, but MinIO blobs NOT auto-deleted (referenced from `document_sends`). Daily orphan-cleanup worker handles. |
+| Parse extracts data with conflict; rep accepts; immediately reverts via "rollback to v3" | 🟡 | Rollback restores prior PDF as current but does NOT re-parse and re-update DB (separate "extract data from this version" action). Documented behavior. |
+
+### 14.7 Phase 7 — Sales send-outs
+
+| Case | Sev | Mitigation |
+| --- | --- | --- |
+| Recipient email contains typos | 🔴 | Pre-send confirmation modal with the exact recipient email shown. No quick-send for first-time recipients. |
+| Body markdown contains unresolved merge fields | 🟠 | Pre-send dry-run renders the body and lists unresolved `{{tokens}}`. Send blocked until resolved. |
+| Body markdown with HTML/script injection | 🔴 | Markdown rendering through DOMPurify before email assembly. Tests include XSS payloads in body. |
+| PDF version active at send time gets rolled back later | 🟢 | `document_sends.berth_pdf_version_id` references the exact version row. Versions immutable. Audit always shows what was sent. |
+| Brochure marked default but archived | 🟡 | Send UI filters by `is_default=true AND archived_at IS NULL`. Fallback to next-newest non-archived; UI flags "default brochure was archived; using <fallback>." |
+| No brochure exists for the port | 🟢 | "Send brochure" button hidden/disabled.
Tooltip: "Upload a brochure in admin first." | +| Per-user send rate limit exceeded | 🟡 | Defaults: 50 sends/user/hour for individual sends, 10 bulk-send operations/user/hour. Clear error: "Hit hourly limit, retry at HH:MM." Audit captures rejected attempts. | +| SMTP credentials wrong/expired | 🔴 | Send fails with clear error: "SMTP authentication failed. Admin: please update credentials in /admin/email." Failed `document_sends` row created with `failed_at` so rep sees it didn't go. | +| IMAP credentials missing | 🟢 | Bounce monitor disabled with banner: "IMAP not configured — bounce-rejection banners won't appear in document timelines." Sends still work. | +| Multiple primary emails on a client (defensive) | 🟢 | Picker shows all emails in dropdown; rep selects. No silent "use the first one." | +| Recipient unsubscribed | 🟡 | Out of scope for this branch. Future: `unsubscribed_at` column on clients; send UI checks and blocks. Documented in §10. | +| Email body length limits | 🟢 | Body Markdown 50KB max; UI shows char count. | +| Bulk send: one of N recipients fails | 🟠 | Each recipient = separate `document_sends` row + separate SMTP transaction. One failure doesn't block others. Per-recipient status visible in bulk-send result panel. | +| Two reps send the same berth PDF to the same client within seconds | 🟢 | Both succeed; both rows in `document_sends`. Not a duplicate concern. | +| Customizing body to remove the system-added link/attachment | 🟠 | Link/attachment added by system AFTER body merge — rep can't accidentally remove it. Custom body input is purely message text. | + +### 14.8 Phase 3 — Website cutover + +| Case | Sev | Mitigation | +| -------------------------------------------------------------------------- | --- | ----------------------------------------------------------------------------------------------------------------------------------------------------- | +| `CRM_PUBLIC_URL` points to wrong env (staging website hits prod CRM) | 🔴 | Website logs resolved URL on startup. CRM health-check returns env name (`prod`/`staging`/`dev`); website refuses to start if env mismatch. | +| CRM endpoint slow → website times out | 🟠 | Website's `$fetch` has 5s timeout. On timeout, falls back to in-memory cache from last successful fetch (5-min TTL). | +| CRM endpoint completely down | 🟡 | Same fallback. Website logs failure; uptime alerting fires. | +| Status mapping returns a value the public `Berth.Status` enum doesn't know | 🟡 | CRM endpoint validates return shape against public enum before responding. Unknown value → 500 + clear error log. | +| Concurrent reads on cache invalidation (thundering herd) | 🟡 | Next.js `revalidate` with `stale-while-revalidate` semantics. Spike handling via cache lock. | +| Website still has old NocoDB code path in deployed bundle | 🟢 | Website cutover commit is one file. Single deploy gets it through. | +| Mooring number format drift between CRM and what the website URL expects | 🔴 | Phase 0 normalization runs before any cutover. Integration tests assert format matches. Critical because `/berths/A1` would 404 if CRM stores `A-01`. | +| Public endpoint exposes data that should be private | 🟠 | Output is deliberate allowlist of fields, not `SELECT *`. Status-mapping logic ensures `archived_at IS NOT NULL` berths are filtered out. 
| + +### 14.9 Async bounce monitor (Phase 7 admin config) + +| Case | Sev | Mitigation | +| -------------------------------------------------------------------------- | --- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Bounce email parsed wrong, classified as size when actually content-policy | 🟡 | Conservative classifier: only flag as `bounce_classified_as='size'` when status code matches known list (`552 5.2.3`, `5.3.4`). Other rejections flagged generically. | +| Bounce email with no `Message-ID` references | 🟢 | Match by recipient + timestamp window. Fall back to "could not match" if both fail. | +| IMAP connection drops mid-fetch | 🟡 | Workers reconnect with backoff; idempotent processing on retry (each bounce email's `Message-ID` is dedup key). | +| Bounce arrives for a `document_sends` row that's been deleted | 🟢 | Match-failure logged; bounce-handler treats as no-op. | +| Forwarding chain (corp accepts → personal Gmail rejects) | 🟠 | Don't auto-retry. Surface rejection to rep. (Documented in §11.1.) | +| Out-of-office auto-replies wrongly classified as bounces | 🟢 | Classifier requires DSN content-type (`message/delivery-status`) — auto-replies don't match. | + +### 14.9a Phase 6a — Pluggable storage backend + +| Case | Sev | Mitigation | +| ------------------------------------------------------------------------------------- | --- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Migration interrupted halfway (process crash, deploy, network) | 🟠 | Resumable: per-row `migrated_at` markers in a temp `_storage_migration_progress` table. Re-running picks up where it left off. Verify sha256 on each row before marking. | +| Disk fills up during migration into filesystem mode | 🟠 | Pre-flight check sums total bytes from source backend, compares against `df` available space + 20% margin. Aborts before starting if insufficient. | +| Filesystem mode on a multi-node deployment (each node has own disk) | 🔴 | Filesystem backend refuses to start when `process.env.MULTI_NODE_DEPLOYMENT=true`. Health check fails on second node when it can't read files written by first node. Admin docs spell out: filesystem = single-node only. | +| Symlinks / file permission issues in filesystem mode | 🟡 | Storage root is created with `0o700` permissions (owner-only). Symlink attacks blocked: realpath check that resolved path is within storage root. | +| Path traversal via filename in filesystem mode | 🔴 | Storage keys validated against `^[a-zA-Z0-9/_.-]+$` regex; reject anything containing `..` or absolute paths. Files always written to `path.join(STORAGE_ROOT, sanitizedKey)`. | +| Switch backend mid-upload (admin clicks "switch" while a rep is uploading a brochure) | 🟠 | Migration acquires advisory lock; new uploads during migration are queued (block on the lock) and complete after migration finishes. Upload UI shows "System maintenance in progress, please wait..." for the few seconds it takes. | +| Both backends temporarily reachable during migration; rep uploads to old backend | 🟡 | The factory always reads `system_settings.storage_backend` fresh on each request (cached briefly). 
After atomic-flip post-migration, the next request goes to the new backend. Race window is sub-second; if it does happen, the file's in the old backend; daily reconcile worker catches it. |
+| Backend-specific URL formats leak to clients (S3 signed URL vs filesystem proxy URL) | 🟡 | Client code only sees the URL string; never inspects/manipulates it. URL format changes are transparent. |
+| Downgrading from S3 → filesystem with files too large to be disk-cache-friendly | 🟡 | No special handling needed for read; both backends stream. UI warns admin if total size > 50% of free disk during pre-flight. |
+| sha256 mismatch on a migrated file (corruption during transfer) | 🟠 | Migration aborts on any sha256 mismatch; the row is logged and the migration overall fails. Admin investigates manually before retrying. |
+| Backup story differs per backend | 🟢 | Admin UI shows a "Backup notes" section explaining: S3 mode = "Configure your S3 provider's lifecycle / replication policies"; filesystem mode = "Include the storage/ directory in your backup tool". |
+| `STORAGE_FILESYSTEM_ROOT` not writable when first switching to filesystem | 🟠 | Pre-flight: write a sentinel file + delete it. If it fails, surface error in admin UI before starting migration. |
+| Existing MinIO references in code use the literal string `s3_key` everywhere | 🟢 | Phase 6a renames columns to `storage_key`. No prod data, so safe to ALTER TABLE. All callers updated in same commit. |
+| Filesystem backend served via Next.js — risk of Next.js processing the file | 🟡 | Use `NextResponse` with a `ReadableStream` body and explicit `Content-Type` + `Content-Disposition`. No Next.js image optimization, no edge runtime — Node runtime only for the storage proxy route. |
+| Token-replay attack on filesystem proxy URLs | 🟠 | Tokens are HMAC-signed (key from `STORAGE_PROXY_HMAC_SECRET` env), include the storage key + expiry in payload, single-use enforced via short Redis cache of recently-seen tokens. |
+
+### 14.10 Cross-cutting
+
+| Case | Sev | Mitigation |
+| --- | --- | --- |
+| Multi-port tenant isolation in recommender | 🔴 | Every recommender query joins through `interest.port_id` and filters berths to same port. Tested with port-A interest + port-B berths fixture. |
+| Permission escalation: who configures SMTP credentials? | 🔴 | Per-port admin role only. Super_admin can do all ports. Reps cannot see encrypted secrets. |
+| Audit log volume — every send + every berth edit logged | 🟡 | Existing `audit_logs` partitioned. Send-out events go to `document_sends` (separate, lighter). |
+| Time zone for `pricing_valid_until` expiry | 🟡 | Stored as `date`. Expiry uses port's configured timezone (system_setting). UTC fallback. |
+| Concurrent berth edit + send composer open: stale snapshot | 🟢 | `document_sends.berth_pdf_version_id` captured at send-click time, not composer-open. Edits made in between are reflected in the sent version. |
+| Per-port system_setting collision (two admins set same key simultaneously) | 🟡 | `system_settings.updated_at` + optimistic concurrency. UI re-fetches on save error. |
+| Encryption key rotation for `*_pass_encrypted` fields | 🟡 | Use existing `EMAIL_CREDENTIAL_KEY` env. Rotation = ops concern (re-encrypt with new key). Documented in CLAUDE.md as future ops runbook.
| +| GDPR / right-to-be-forgotten: client deletion cascades to `document_sends` | 🟠 | **OPEN**: decide between (a) full delete, (b) anonymize PII keep metadata, (c) keep all (legal hold). Default lean: (b). Revisit when Phase 7 schema lands. | +| Data leak via diff dialog: PDF parser exposes data the rep shouldn't see | 🟠 | Upload UI confirms target berth + scopes parser to target's port. Cross-port mismatch → reject upload. | + +### 14.11 Critical-item summary (🔴) + +Eleven critical items, each requiring a test before its phase ships: + +1. NocoDB mooring collisions → unique constraint + ON CONFLICT +2. Non-PDF disguised upload → magic-byte check +3. Recipient email typos → pre-send confirmation +4. XSS in body markdown → DOMPurify + XSS payload tests +5. SMTP credentials silently failing → loud error + failed `document_sends` row +6. Wrong-environment `CRM_PUBLIC_URL` → health-check env match +7. Mooring format drift breaking `/berths/A1` URLs → Phase 0 normalization gates Phase 3 +8. Multi-port isolation in recommender → explicit `port_id` filter + cross-port test +9. Permission escalation on SMTP creds → per-port admin only, no rep visibility +10. Filesystem backend in multi-node deployment → refuse to start; documented + health-check enforced +11. Path traversal via storage key in filesystem mode → strict regex validation + path realpath check diff --git a/docs/eoi-documenso-field-mapping.md b/docs/eoi-documenso-field-mapping.md index 02c5ce7..6ae3c32 100644 --- a/docs/eoi-documenso-field-mapping.md +++ b/docs/eoi-documenso-field-mapping.md @@ -19,18 +19,23 @@ The template exposes eight text fields (`formValues` keys) and two boolean check ## Field mapping -| Documenso key | Type | Legacy source | New `EoiContext` path | Notes | -| -------------- | ------- | --------------------------- | ----------------------------------------------------- | ------------------------------------------------------------------------- | -| `Name` | text | `interest['Full Name']` | `context.client.fullName` | The interest's point-of-contact client (billing signer). | -| `Email` | text | `interest['Email Address']` | `context.client.primaryEmail` | Primary email contact from `client_contacts`. | -| `Address` | text | `interest['Address']` | concat `context.client.address.{street,city,country}` | Concatenate street, city, country with `', '`. Empty if address is null. | -| `Yacht Name` | text | `interest['Yacht Name']` | `context.yacht.name` | Yacht is now a first-class row; pulled via `interest.yachtId`. | -| `Length` | text | `interest['Length']` | `context.yacht.lengthFt` | Send as string. Documenso doesn't enforce numeric format. | -| `Width` | text | `interest['Width']` | `context.yacht.widthFt` | Same. | -| `Draft` | text | `interest['Depth']` | `context.yacht.draftFt` | Legacy field was named "Depth" in NocoDB; Documenso key is "Draft". | -| `Berth Number` | text | `berthNumbers` (joined) | `context.berth.mooringNumber` | One berth per reservation. Multi-berth case was multi-interest in legacy. | -| `Lease_10` | boolean | hardcoded `false` | `false` | Hardcoded — legacy flow defaults to Purchase (not Lease). | -| `Purchase` | boolean | hardcoded `true` | `true` | Hardcoded — legacy flow defaults to Purchase. | +The legacy template (Documenso template `8`, configured in production) auto-fills exactly the fields below. All eight text fields + two booleans are populated by `buildDocumensoPayload()` from the resolved `EoiContext`. 
Anything else on the form (signature, date, terms acknowledgment) is filled in by the client inside Documenso. + +| Documenso key | Type | Legacy source | New `EoiContext` path | Notes | +| -------------- | ------- | --------------------------- | ----------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Name` | text | `interest['Full Name']` | `context.client.fullName` | The interest's point-of-contact client (billing signer). | +| `Email` | text | `interest['Email Address']` | `context.client.primaryEmail` | Primary email contact from `client_contacts`. | +| `Address` | text | `interest['Address']` | concat `context.client.address.{street,city,country}` | Concatenate street, city, country with `', '`. Empty if address is null. | +| `Yacht Name` | text | `interest['Yacht Name']` | `context.yacht.name` | Yacht is now a first-class row; pulled via `interest.yachtId`. Empty string when no yacht is linked yet. | +| `Length` | text | `interest['Length']` | `context.yacht.lengthFt` | Boat dimension. Send as string. Documenso doesn't enforce numeric format. Empty string when not applicable. | +| `Width` | text | `interest['Width']` | `context.yacht.widthFt` | Same. | +| `Draft` | text | `interest['Depth']` | `context.yacht.draftFt` | Legacy field was named "Depth" in NocoDB; Documenso key is "Draft". | +| `Berth Number` | text | `berthNumbers` (joined) | `context.berth.mooringNumber` | The interest's PRIMARY berth (resolved via `interest_berths.is_primary=true`). Empty string when no primary set. | +| `Berth Range` | text | (new) | `context.eoiBerthRange` | **NEW IN PHASE 5** — compact range string for multi-berth EOIs (e.g. `"A1-A3, B5-B7"`) covering every junction row marked `is_in_eoi_bundle=true`. Empty string when the bundle is empty. **The live Documenso template (id `8`) does NOT yet have this field. Add a `Berth Range` text field to the template before multi-berth EOIs render the range; until then Documenso silently drops the value and only `Berth Number` (the primary mooring) renders.** | +| `Lease_10` | boolean | hardcoded `false` | `false` | Hardcoded — legacy flow defaults to Purchase (not Lease). | +| `Purchase` | boolean | hardcoded `true` | `true` | Hardcoded — legacy flow defaults to Purchase. | + +**Backwards-compatibility guarantee**: every legacy `formValues` key is still emitted with the same name and type. The only addition is `Berth Range` (Phase 5). Documenso silently ignores unknown formValues keys, so old templates that don't have `Berth Range` will simply not render it — single-berth EOIs continue to work identically. No template changes are required for legacy use. 
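+
+A sketch of the assembly described above. `EoiContextSketch` trims the real `EoiContext` to just the paths the table references, and `buildFormValues` mirrors (rather than reproduces) what `buildDocumensoPayload()` emits:
+
+```ts
+// Illustrative only. Field keys match the mapping table; the context type is
+// a trimmed stand-in, not the production EoiContext definition.
+interface EoiContextSketch {
+  client: {
+    fullName: string;
+    primaryEmail: string;
+    address: { street: string; city: string; country: string } | null;
+  };
+  yacht: { name: string; lengthFt: number | null; widthFt: number | null; draftFt: number | null } | null;
+  berth: { mooringNumber: string } | null; // primary berth via interest_berths.is_primary
+  eoiBerthRange: string; // formatBerthRange() over is_in_eoi_bundle rows; "" when the bundle is empty
+}
+
+export function buildFormValues(ctx: EoiContextSketch): Record<string, string | boolean> {
+  // Concatenate street, city, country with ', '; empty when address is null.
+  const address = ctx.client.address
+    ? [ctx.client.address.street, ctx.client.address.city, ctx.client.address.country]
+        .filter(Boolean)
+        .join(', ')
+    : '';
+
+  return {
+    Name: ctx.client.fullName,
+    Email: ctx.client.primaryEmail,
+    Address: address,
+    'Yacht Name': ctx.yacht?.name ?? '', // empty string when no yacht linked yet
+    Length: ctx.yacht?.lengthFt != null ? String(ctx.yacht.lengthFt) : '', // sent as strings
+    Width: ctx.yacht?.widthFt != null ? String(ctx.yacht.widthFt) : '',
+    Draft: ctx.yacht?.draftFt != null ? String(ctx.yacht.draftFt) : '',
+    'Berth Number': ctx.berth?.mooringNumber ?? '', // empty when no primary set
+    'Berth Range': ctx.eoiBerthRange, // silently dropped by templates without the field
+    Lease_10: false, // hardcoded: legacy flow defaults to Purchase
+    Purchase: true,
+  };
+}
+```
+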
## Document `meta` fields (non-`formValues`) diff --git a/package.json b/package.json index e58d025..e116f4e 100644 --- a/package.json +++ b/package.json @@ -52,6 +52,7 @@ "@tanstack/react-query": "^5.62.0", "@tanstack/react-query-devtools": "^5.62.0", "@tanstack/react-table": "^8.21.3", + "@types/pdfkit": "^0.17.6", "archiver": "^7.0.1", "better-auth": "^1.2.0", "bullmq": "^5.25.0", @@ -73,6 +74,7 @@ "nodemailer": "^6.9.0", "openai": "^6.27.0", "pdf-lib": "^1.17.1", + "pdfkit": "^0.18.0", "pino": "^9.5.0", "pino-pretty": "^13.0.0", "postgres": "^3.4.0", @@ -81,6 +83,7 @@ "react-dom": "^19.0.0", "react-hook-form": "^7.54.0", "recharts": "^3.8.0", + "sharp": "^0.34.5", "socket.io": "^4.8.0", "socket.io-client": "^4.8.0", "sonner": "^1.7.0", diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml index 161fe94..2366097 100644 --- a/pnpm-lock.yaml +++ b/pnpm-lock.yaml @@ -101,6 +101,9 @@ importers: '@tanstack/react-table': specifier: ^8.21.3 version: 8.21.3(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + '@types/pdfkit': + specifier: ^0.17.6 + version: 0.17.6 archiver: specifier: ^7.0.1 version: 7.0.1 @@ -164,6 +167,9 @@ importers: pdf-lib: specifier: ^1.17.1 version: 1.17.1 + pdfkit: + specifier: ^0.18.0 + version: 0.18.0 pino: specifier: ^9.5.0 version: 9.14.0 @@ -188,6 +194,9 @@ importers: recharts: specifier: ^3.8.0 version: 3.8.0(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react-is@18.3.1)(react@19.2.4)(redux@5.0.1) + sharp: + specifier: ^0.34.5 + version: 0.34.5 socket.io: specifier: ^4.8.0 version: 4.8.3 @@ -1153,64 +1162,138 @@ packages: resolution: {integrity: sha512-bV0Tgo9K4hfPCek+aMAn81RppFKv2ySDQeMoSZuvTASywNTnVJCArCZE2FWqpvIatKu7VMRLWlR1EazvVhDyhQ==} engines: {node: '>=18.18'} + '@img/colour@1.1.0': + resolution: {integrity: sha512-Td76q7j57o/tLVdgS746cYARfSyxk8iEfRxewL9h4OMzYhbW4TAcppl0mT4eyqXddh6L/jwoM75mo7ixa/pCeQ==} + engines: {node: '>=18'} + '@img/sharp-darwin-arm64@0.33.5': resolution: {integrity: sha512-UT4p+iz/2H4twwAoLCqfA9UH5pI6DggwKEGuaPy7nCVQ8ZsiY5PIcrRvD1DzuY3qYL07NtIQcWnBSY/heikIFQ==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} cpu: [arm64] os: [darwin] + '@img/sharp-darwin-arm64@0.34.5': + resolution: {integrity: sha512-imtQ3WMJXbMY4fxb/Ndp6HBTNVtWCUI0WdobyheGf5+ad6xX8VIDO8u2xE4qc/fr08CKG/7dDseFtn6M6g/r3w==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [arm64] + os: [darwin] + '@img/sharp-darwin-x64@0.33.5': resolution: {integrity: sha512-fyHac4jIc1ANYGRDxtiqelIbdWkIuQaI84Mv45KvGRRxSAa7o7d1ZKAOBaYbnepLC1WqxfpimdeWfvqqSGwR2Q==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} cpu: [x64] os: [darwin] + '@img/sharp-darwin-x64@0.34.5': + resolution: {integrity: sha512-YNEFAF/4KQ/PeW0N+r+aVVsoIY0/qxxikF2SWdp+NRkmMB7y9LBZAVqQ4yhGCm/H3H270OSykqmQMKLBhBJDEw==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [x64] + os: [darwin] + '@img/sharp-libvips-darwin-arm64@1.0.4': resolution: {integrity: sha512-XblONe153h0O2zuFfTAbQYAX2JhYmDHeWikp1LM9Hul9gVPjFY427k6dFEcOL72O01QxQsWi761svJ/ev9xEDg==} cpu: [arm64] os: [darwin] + '@img/sharp-libvips-darwin-arm64@1.2.4': + resolution: {integrity: sha512-zqjjo7RatFfFoP0MkQ51jfuFZBnVE2pRiaydKJ1G/rHZvnsrHAOcQALIi9sA5co5xenQdTugCvtb1cuf78Vf4g==} + cpu: [arm64] + os: [darwin] + '@img/sharp-libvips-darwin-x64@1.0.4': resolution: {integrity: sha512-xnGR8YuZYfJGmWPvmlunFaWJsb9T/AO2ykoP3Fz/0X5XV2aoYBPkX6xqCQvUTKKiLddarLaxpzNe+b1hjeWHAQ==} cpu: [x64] os: [darwin] + '@img/sharp-libvips-darwin-x64@1.2.4': + resolution: {integrity: 
sha512-1IOd5xfVhlGwX+zXv2N93k0yMONvUlANylbJw1eTah8K/Jtpi15KC+WSiaX/nBmbm2HxRM1gZ0nSdjSsrZbGKg==} + cpu: [x64] + os: [darwin] + '@img/sharp-libvips-linux-arm64@1.0.4': resolution: {integrity: sha512-9B+taZ8DlyyqzZQnoeIvDVR/2F4EbMepXMc/NdVbkzsJbzkUjhXv/70GQJ7tdLA4YJgNP25zukcxpX2/SueNrA==} cpu: [arm64] os: [linux] libc: [glibc] + '@img/sharp-libvips-linux-arm64@1.2.4': + resolution: {integrity: sha512-excjX8DfsIcJ10x1Kzr4RcWe1edC9PquDRRPx3YVCvQv+U5p7Yin2s32ftzikXojb1PIFc/9Mt28/y+iRklkrw==} + cpu: [arm64] + os: [linux] + libc: [glibc] + '@img/sharp-libvips-linux-arm@1.0.5': resolution: {integrity: sha512-gvcC4ACAOPRNATg/ov8/MnbxFDJqf/pDePbBnuBDcjsI8PssmjoKMAz4LtLaVi+OnSb5FK/yIOamqDwGmXW32g==} cpu: [arm] os: [linux] libc: [glibc] + '@img/sharp-libvips-linux-arm@1.2.4': + resolution: {integrity: sha512-bFI7xcKFELdiNCVov8e44Ia4u2byA+l3XtsAj+Q8tfCwO6BQ8iDojYdvoPMqsKDkuoOo+X6HZA0s0q11ANMQ8A==} + cpu: [arm] + os: [linux] + libc: [glibc] + + '@img/sharp-libvips-linux-ppc64@1.2.4': + resolution: {integrity: sha512-FMuvGijLDYG6lW+b/UvyilUWu5Ayu+3r2d1S8notiGCIyYU/76eig1UfMmkZ7vwgOrzKzlQbFSuQfgm7GYUPpA==} + cpu: [ppc64] + os: [linux] + libc: [glibc] + + '@img/sharp-libvips-linux-riscv64@1.2.4': + resolution: {integrity: sha512-oVDbcR4zUC0ce82teubSm+x6ETixtKZBh/qbREIOcI3cULzDyb18Sr/Wcyx7NRQeQzOiHTNbZFF1UwPS2scyGA==} + cpu: [riscv64] + os: [linux] + libc: [glibc] + '@img/sharp-libvips-linux-s390x@1.0.4': resolution: {integrity: sha512-u7Wz6ntiSSgGSGcjZ55im6uvTrOxSIS8/dgoVMoiGE9I6JAfU50yH5BoDlYA1tcuGS7g/QNtetJnxA6QEsCVTA==} cpu: [s390x] os: [linux] libc: [glibc] + '@img/sharp-libvips-linux-s390x@1.2.4': + resolution: {integrity: sha512-qmp9VrzgPgMoGZyPvrQHqk02uyjA0/QrTO26Tqk6l4ZV0MPWIW6LTkqOIov+J1yEu7MbFQaDpwdwJKhbJvuRxQ==} + cpu: [s390x] + os: [linux] + libc: [glibc] + '@img/sharp-libvips-linux-x64@1.0.4': resolution: {integrity: sha512-MmWmQ3iPFZr0Iev+BAgVMb3ZyC4KeFc3jFxnNbEPas60e1cIfevbtuyf9nDGIzOaW9PdnDciJm+wFFaTlj5xYw==} cpu: [x64] os: [linux] libc: [glibc] + '@img/sharp-libvips-linux-x64@1.2.4': + resolution: {integrity: sha512-tJxiiLsmHc9Ax1bz3oaOYBURTXGIRDODBqhveVHonrHJ9/+k89qbLl0bcJns+e4t4rvaNBxaEZsFtSfAdquPrw==} + cpu: [x64] + os: [linux] + libc: [glibc] + '@img/sharp-libvips-linuxmusl-arm64@1.0.4': resolution: {integrity: sha512-9Ti+BbTYDcsbp4wfYib8Ctm1ilkugkA/uscUn6UXK1ldpC1JjiXbLfFZtRlBhjPZ5o1NCLiDbg8fhUPKStHoTA==} cpu: [arm64] os: [linux] libc: [musl] + '@img/sharp-libvips-linuxmusl-arm64@1.2.4': + resolution: {integrity: sha512-FVQHuwx1IIuNow9QAbYUzJ+En8KcVm9Lk5+uGUQJHaZmMECZmOlix9HnH7n1TRkXMS0pGxIJokIVB9SuqZGGXw==} + cpu: [arm64] + os: [linux] + libc: [musl] + '@img/sharp-libvips-linuxmusl-x64@1.0.4': resolution: {integrity: sha512-viYN1KX9m+/hGkJtvYYp+CCLgnJXwiQB39damAO7WMdKWlIhmYTfHjwSbQeUK/20vY154mwezd9HflVFM1wVSw==} cpu: [x64] os: [linux] libc: [musl] + '@img/sharp-libvips-linuxmusl-x64@1.2.4': + resolution: {integrity: sha512-+LpyBk7L44ZIXwz/VYfglaX/okxezESc6UxDSoyo2Ks6Jxc4Y7sGjpgU9s4PMgqgjj1gZCylTieNamqA1MF7Dg==} + cpu: [x64] + os: [linux] + libc: [musl] + '@img/sharp-linux-arm64@0.33.5': resolution: {integrity: sha512-JMVv+AMRyGOHtO1RFBiJy/MBsgz0x4AWrT6QoEVVTyh1E39TrCUpTRI7mx9VksGX4awWASxqCYLCV4wBZHAYxA==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} @@ -1218,6 +1301,13 @@ packages: os: [linux] libc: [glibc] + '@img/sharp-linux-arm64@0.34.5': + resolution: {integrity: sha512-bKQzaJRY/bkPOXyKx5EVup7qkaojECG6NLYswgktOZjaXecSAeCWiZwwiFf3/Y+O1HrauiE3FVsGxFg8c24rZg==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [arm64] + os: [linux] + libc: [glibc] + 
'@img/sharp-linux-arm@0.33.5': resolution: {integrity: sha512-JTS1eldqZbJxjvKaAkxhZmBqPRGmxgu+qFKSInv8moZ2AmT5Yib3EQ1c6gp493HvrvV8QgdOXdyaIBrhvFhBMQ==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} @@ -1225,6 +1315,27 @@ packages: os: [linux] libc: [glibc] + '@img/sharp-linux-arm@0.34.5': + resolution: {integrity: sha512-9dLqsvwtg1uuXBGZKsxem9595+ujv0sJ6Vi8wcTANSFpwV/GONat5eCkzQo/1O6zRIkh0m/8+5BjrRr7jDUSZw==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [arm] + os: [linux] + libc: [glibc] + + '@img/sharp-linux-ppc64@0.34.5': + resolution: {integrity: sha512-7zznwNaqW6YtsfrGGDA6BRkISKAAE1Jo0QdpNYXNMHu2+0dTrPflTLNkpc8l7MUP5M16ZJcUvysVWWrMefZquA==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [ppc64] + os: [linux] + libc: [glibc] + + '@img/sharp-linux-riscv64@0.34.5': + resolution: {integrity: sha512-51gJuLPTKa7piYPaVs8GmByo7/U7/7TZOq+cnXJIHZKavIRHAP77e3N2HEl3dgiqdD/w0yUfiJnII77PuDDFdw==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [riscv64] + os: [linux] + libc: [glibc] + '@img/sharp-linux-s390x@0.33.5': resolution: {integrity: sha512-y/5PCd+mP4CA/sPDKl2961b+C9d+vPAveS33s6Z3zfASk2j5upL6fXVPZi7ztePZ5CuH+1kW8JtvxgbuXHRa4Q==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} @@ -1232,6 +1343,13 @@ packages: os: [linux] libc: [glibc] + '@img/sharp-linux-s390x@0.34.5': + resolution: {integrity: sha512-nQtCk0PdKfho3eC5MrbQoigJ2gd1CgddUMkabUj+rBevs8tZ2cULOx46E7oyX+04WGfABgIwmMC0VqieTiR4jg==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [s390x] + os: [linux] + libc: [glibc] + '@img/sharp-linux-x64@0.33.5': resolution: {integrity: sha512-opC+Ok5pRNAzuvq1AG0ar+1owsu842/Ab+4qvU879ippJBHvyY5n2mxF1izXqkPYlGuP/M556uh53jRLJmzTWA==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} @@ -1239,6 +1357,13 @@ packages: os: [linux] libc: [glibc] + '@img/sharp-linux-x64@0.34.5': + resolution: {integrity: sha512-MEzd8HPKxVxVenwAa+JRPwEC7QFjoPWuS5NZnBt6B3pu7EG2Ge0id1oLHZpPJdn3OQK+BQDiw9zStiHBTJQQQQ==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [x64] + os: [linux] + libc: [glibc] + '@img/sharp-linuxmusl-arm64@0.33.5': resolution: {integrity: sha512-XrHMZwGQGvJg2V/oRSUfSAfjfPxO+4DkiRh6p2AFjLQztWUuY/o8Mq0eMQVIY7HJ1CDQUJlxGGZRw1a5bqmd1g==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} @@ -1246,6 +1371,13 @@ packages: os: [linux] libc: [musl] + '@img/sharp-linuxmusl-arm64@0.34.5': + resolution: {integrity: sha512-fprJR6GtRsMt6Kyfq44IsChVZeGN97gTD331weR1ex1c1rypDEABN6Tm2xa1wE6lYb5DdEnk03NZPqA7Id21yg==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [arm64] + os: [linux] + libc: [musl] + '@img/sharp-linuxmusl-x64@0.33.5': resolution: {integrity: sha512-WT+d/cgqKkkKySYmqoZ8y3pxx7lx9vVejxW/W4DOFMYVSkErR+w7mf2u8m/y4+xHe7yY9DAXQMWQhpnMuFfScw==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} @@ -1253,23 +1385,53 @@ packages: os: [linux] libc: [musl] + '@img/sharp-linuxmusl-x64@0.34.5': + resolution: {integrity: sha512-Jg8wNT1MUzIvhBFxViqrEhWDGzqymo3sV7z7ZsaWbZNDLXRJZoRGrjulp60YYtV4wfY8VIKcWidjojlLcWrd8Q==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [x64] + os: [linux] + libc: [musl] + '@img/sharp-wasm32@0.33.5': resolution: {integrity: sha512-ykUW4LVGaMcU9lu9thv85CbRMAwfeadCJHRsg2GmeRa/cJxsVY9Rbd57JcMxBkKHag5U/x7TSBpScF4U8ElVzg==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} cpu: [wasm32] + '@img/sharp-wasm32@0.34.5': + resolution: {integrity: sha512-OdWTEiVkY2PHwqkbBI8frFxQQFekHaSSkUIJkwzclWZe64O1X4UlUjqqqLaPbUpMOQk6FBu/HtlGXNblIs0huw==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [wasm32] + + 
'@img/sharp-win32-arm64@0.34.5': + resolution: {integrity: sha512-WQ3AgWCWYSb2yt+IG8mnC6Jdk9Whs7O0gxphblsLvdhSpSTtmu69ZG1Gkb6NuvxsNACwiPV6cNSZNzt0KPsw7g==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [arm64] + os: [win32] + '@img/sharp-win32-ia32@0.33.5': resolution: {integrity: sha512-T36PblLaTwuVJ/zw/LaH0PdZkRz5rd3SmMHX8GSmR7vtNSP5Z6bQkExdSK7xGWyxLw4sUknBuugTelgw2faBbQ==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} cpu: [ia32] os: [win32] + '@img/sharp-win32-ia32@0.34.5': + resolution: {integrity: sha512-FV9m/7NmeCmSHDD5j4+4pNI8Cp3aW+JvLoXcTUo0IqyjSfAZJ8dIUmijx1qaJsIiU+Hosw6xM5KijAWRJCSgNg==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [ia32] + os: [win32] + '@img/sharp-win32-x64@0.33.5': resolution: {integrity: sha512-MpY/o8/8kj+EcnxwvrP4aTJSWw/aZ7JIGR4aBeZkZw5B7/Jn+tY9/VNwtcoGmdT7GfggGIU4kygOMSbYnOrAbg==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} cpu: [x64] os: [win32] + '@img/sharp-win32-x64@0.34.5': + resolution: {integrity: sha512-+29YMsqY2/9eFEiW93eqWnuLcWcufowXewwSNIT6UwZdUUCrM3oFjMWH/Z6/TMmb4hlFenmfAVbpWeup2jryCw==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [x64] + os: [win32] + '@ioredis/commands@1.5.0': resolution: {integrity: sha512-eUgLqrMf8nJkZxT24JvVRrQya1vZkQh8BBeYNwGDqa5I0VUi8ACx7uFvAaLxintokpTenkK6DASvo/bvNbBGow==} @@ -1390,10 +1552,18 @@ packages: cpu: [x64] os: [win32] + '@noble/ciphers@1.3.0': + resolution: {integrity: sha512-2I0gnIVPtfnMw9ee9h1dJG7tp81+8Ob3OJb3Mv37rx5L40/b0i7djjCVvGOVqc9AEIQyvyu1i6ypKdFw8R8gQw==} + engines: {node: ^14.21.3 || >=16} + '@noble/ciphers@2.1.1': resolution: {integrity: sha512-bysYuiVfhxNJuldNXlFEitTVdNnYUc+XNJZd7Qm2a5j1vZHgY+fazadNFWFaMK/2vye0JVlxV3gHmC0WDfAOQw==} engines: {node: '>= 20.19.0'} + '@noble/hashes@1.8.0': + resolution: {integrity: sha512-jCs9ldd7NwzpgXDIf6P3+NrHh9/sD6CQdxHyjQI+h/6rDNo88ypBxxz45UDuZHz9r3tNz7N/VInSVoVdtXEI4A==} + engines: {node: ^14.21.3 || >=16} + '@noble/hashes@2.0.1': resolution: {integrity: sha512-XlOlEbQcE9fmuXxrVTXCTlG2nlRXa9Rj3rr5Ue/+tX+nmkgbX720YHh0VR3hBF9xDvwnb8D2shVGOwNx+ulArw==} engines: {node: '>= 20.19.0'} @@ -2328,6 +2498,9 @@ packages: '@types/nodemailer@6.4.23': resolution: {integrity: sha512-aFV3/NsYFLSx9mbb5gtirBSXJnAlrusoKNuPbxsASWc7vrKLmIrTQRpdcxNcSFL3VW2A2XpeLEavwb2qMi6nlQ==} + '@types/pdfkit@0.17.6': + resolution: {integrity: sha512-tIwzxk2uWKp0Cq9JIluQXJid77lYhF52EsIOwhsMF4iWLA6YneoBR1xVKYYdAysHuepUB0OX4tdwMiUDdGKmig==} + '@types/react-dom@19.2.3': resolution: {integrity: sha512-jp2L/eY6fn+KgVVQAOqYItbF0VY/YApe5Mz2F0aykSO8gx31bYCZyvSeYxCHKvzHG5eZjc+zyaS5BrBWya2+kQ==} peerDependencies: @@ -2776,6 +2949,10 @@ packages: bare-url@2.4.2: resolution: {integrity: sha512-/9a2j4ac6ckpmAHvod/ob7x439OAHst/drc2Clnq+reRYd/ovddwcF4LfoxHyNk5AuGBnPg+HqFjmE/Zpq6v0A==} + base64-js@0.0.8: + resolution: {integrity: sha512-3XSA2cR/h/73EzlXXdU6YNycmYI7+kicTxks4eJg2g39biHR84slg2+des+p7iHYhbRg/udIS4TD53WabcOUkw==} + engines: {node: '>= 0.4'} + base64-js@1.5.1: resolution: {integrity: sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA==} @@ -2893,6 +3070,9 @@ packages: browser-or-node@2.1.1: resolution: {integrity: sha512-8CVjaLJGuSKMVTxJ2DpBl5XnlNDiT4cQFeuCJJrvJmts9YrTZDizTX7PjC2s6W4x+MBGZeEY6dGMrF04/6Hgqg==} + browserify-zlib@0.2.0: + resolution: {integrity: sha512-Z942RysHXmJrhqk88FmKBVq/v5tqmSkDz7p54G/MGyjMnCFFnC79XWNbg+Vta8W6Wb2qtSZTSxIGkJrRpCFEiA==} + browserslist@4.28.1: resolution: {integrity: sha512-ZC5Bd0LgJXgwGqUknZY/vkUQ04r8NXnJZ3yYi4vDmSiZmC/pdSN0NbNRPxZpbtO4uAfDUAFffO8IZoM3Gj8IkA==} engines: {node: ^6 
|| ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7} @@ -4134,6 +4314,9 @@ packages: resolution: {integrity: sha512-cEiJEAEoIbWfCZYKWhVwFuvPX1gETRYPw6LlaTKoxD3s2AkXzkCjnp6h0V77ozyqj0jakteJ4YqDJT830+lVGw==} engines: {node: '>=14'} + js-md5@0.8.3: + resolution: {integrity: sha512-qR0HB5uP6wCuRMrWPTrkMaev7MJZwJuuw4fnwAzRgP4J4/F8RwtodOKpGp4XpqsLBFzzgqIO42efFAyz2Et6KQ==} + js-tokens@10.0.0: resolution: {integrity: sha512-lM/UBzQmfJRo9ABXbPWemivdCW8V2G8FHaHdypQaIy523snUjog0W71ayWXTjiR+ixeMyVHN2XcpnTd/liPg/Q==} @@ -4286,6 +4469,9 @@ packages: resolution: {integrity: sha512-/vlFKAoH5Cgt3Ie+JLhRbwOsCQePABiU3tJ1egGvyQ+33R/vcwM2Zl2QR/LzjsBeItPt3oSVXapn+m4nQDvpzw==} engines: {node: '>=14'} + linebreak@1.1.0: + resolution: {integrity: sha512-MHp03UImeVhB7XZtjd0E4n6+3xr5Dq/9xI/5FptGk5FrbDR3zagPa2DS6U8ks/3HjbKWG9Q1M2ufOzxV2qLYSQ==} + lines-and-columns@1.2.4: resolution: {integrity: sha512-7ylylesZQ/PV29jhEDl3Ufjo6ZX7gCqJr5F7PKrqc93v7fzSymt1BpwEU8nAUXs8qzzvqhbjhK5QZg6Mt/HkBg==} @@ -4696,6 +4882,9 @@ packages: pdf-lib@1.17.1: resolution: {integrity: sha512-V/mpyJAoTsN4cnP31vc0wfNA1+p20evqqnap0KLoRUN0Yk/p3wN52DOEsL4oBFcLdb76hlpKPtzJIgo67j/XLw==} + pdfkit@0.18.0: + resolution: {integrity: sha512-NvUwSDZ0eYEzqAiWwVQkRkjYUkZ48kcsHuCO31ykqPPIVkwoSDjDGiwIgHHNtsiwls3z3P/zy4q00hl2chg2Ug==} + peberminta@0.9.0: resolution: {integrity: sha512-XIxfHpEuSJbITd1H3EeQwpcZbTLHc+VVr8ANI9t5sit565tsI4/xK3KWTUFE2e6QiangUkh3B0jihzmGnNrRsQ==} @@ -4757,6 +4946,9 @@ packages: engines: {node: '>=18'} hasBin: true + png-js@1.1.0: + resolution: {integrity: sha512-PM/uYGzGdNSzqeOgly68+6wKQDL1SY0a/N+OEa/+br6LnHWOAJB0Npiamnodfq3jd2LS/i2fMeOKSAILjA+m5Q==} + possible-typed-array-names@1.1.0: resolution: {integrity: sha512-/+5VFTchJDoVj3bhoqi6UeymcD00DAwb1nJwamzPvHEszJ4FpF6SNNbUbOS8yI56qHzdV8eK0qEfOSiodkTdxg==} engines: {node: '>= 0.4'} @@ -5386,6 +5578,10 @@ packages: resolution: {integrity: sha512-haPVm1EkS9pgvHrQ/F3Xy+hgcuMV0Wm9vfIBSiwZ05k+xgb0PkBQpGsAA/oWdDobNaZTH5ppvHtzCFbnSEwHVw==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + sharp@0.34.5: + resolution: {integrity: sha512-Ou9I5Ft9WNcCbXrU9cMgPBcCK8LiwLqcbywW3t4oDV37n1pzpuNLsYiAV8eODnjbtQlSDwZ2cUEeQz4E54Hltg==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + shebang-command@2.0.0: resolution: {integrity: sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==} engines: {node: '>=8'} @@ -6648,81 +6844,177 @@ snapshots: '@humanwhocodes/retry@0.4.3': {} + '@img/colour@1.1.0': {} + '@img/sharp-darwin-arm64@0.33.5': optionalDependencies: '@img/sharp-libvips-darwin-arm64': 1.0.4 optional: true + '@img/sharp-darwin-arm64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-darwin-arm64': 1.2.4 + optional: true + '@img/sharp-darwin-x64@0.33.5': optionalDependencies: '@img/sharp-libvips-darwin-x64': 1.0.4 optional: true + '@img/sharp-darwin-x64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-darwin-x64': 1.2.4 + optional: true + '@img/sharp-libvips-darwin-arm64@1.0.4': optional: true + '@img/sharp-libvips-darwin-arm64@1.2.4': + optional: true + '@img/sharp-libvips-darwin-x64@1.0.4': optional: true + '@img/sharp-libvips-darwin-x64@1.2.4': + optional: true + '@img/sharp-libvips-linux-arm64@1.0.4': optional: true + '@img/sharp-libvips-linux-arm64@1.2.4': + optional: true + '@img/sharp-libvips-linux-arm@1.0.5': optional: true + '@img/sharp-libvips-linux-arm@1.2.4': + optional: true + + '@img/sharp-libvips-linux-ppc64@1.2.4': + optional: true + + '@img/sharp-libvips-linux-riscv64@1.2.4': + optional: true + 
'@img/sharp-libvips-linux-s390x@1.0.4': optional: true + '@img/sharp-libvips-linux-s390x@1.2.4': + optional: true + '@img/sharp-libvips-linux-x64@1.0.4': optional: true + '@img/sharp-libvips-linux-x64@1.2.4': + optional: true + '@img/sharp-libvips-linuxmusl-arm64@1.0.4': optional: true + '@img/sharp-libvips-linuxmusl-arm64@1.2.4': + optional: true + '@img/sharp-libvips-linuxmusl-x64@1.0.4': optional: true + '@img/sharp-libvips-linuxmusl-x64@1.2.4': + optional: true + '@img/sharp-linux-arm64@0.33.5': optionalDependencies: '@img/sharp-libvips-linux-arm64': 1.0.4 optional: true + '@img/sharp-linux-arm64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linux-arm64': 1.2.4 + optional: true + '@img/sharp-linux-arm@0.33.5': optionalDependencies: '@img/sharp-libvips-linux-arm': 1.0.5 optional: true + '@img/sharp-linux-arm@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linux-arm': 1.2.4 + optional: true + + '@img/sharp-linux-ppc64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linux-ppc64': 1.2.4 + optional: true + + '@img/sharp-linux-riscv64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linux-riscv64': 1.2.4 + optional: true + '@img/sharp-linux-s390x@0.33.5': optionalDependencies: '@img/sharp-libvips-linux-s390x': 1.0.4 optional: true + '@img/sharp-linux-s390x@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linux-s390x': 1.2.4 + optional: true + '@img/sharp-linux-x64@0.33.5': optionalDependencies: '@img/sharp-libvips-linux-x64': 1.0.4 optional: true + '@img/sharp-linux-x64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linux-x64': 1.2.4 + optional: true + '@img/sharp-linuxmusl-arm64@0.33.5': optionalDependencies: '@img/sharp-libvips-linuxmusl-arm64': 1.0.4 optional: true + '@img/sharp-linuxmusl-arm64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linuxmusl-arm64': 1.2.4 + optional: true + '@img/sharp-linuxmusl-x64@0.33.5': optionalDependencies: '@img/sharp-libvips-linuxmusl-x64': 1.0.4 optional: true + '@img/sharp-linuxmusl-x64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linuxmusl-x64': 1.2.4 + optional: true + '@img/sharp-wasm32@0.33.5': dependencies: '@emnapi/runtime': 1.9.0 optional: true + '@img/sharp-wasm32@0.34.5': + dependencies: + '@emnapi/runtime': 1.9.0 + optional: true + + '@img/sharp-win32-arm64@0.34.5': + optional: true + '@img/sharp-win32-ia32@0.33.5': optional: true + '@img/sharp-win32-ia32@0.34.5': + optional: true + '@img/sharp-win32-x64@0.33.5': optional: true + '@img/sharp-win32-x64@0.34.5': + optional: true + '@ioredis/commands@1.5.0': {} '@ioredis/commands@1.5.1': {} @@ -6816,8 +7108,12 @@ snapshots: '@next/swc-win32-x64-msvc@15.1.0': optional: true + '@noble/ciphers@1.3.0': {} + '@noble/ciphers@2.1.1': {} + '@noble/hashes@1.8.0': {} + '@noble/hashes@2.0.1': {} '@nodelib/fs.scandir@2.1.5': @@ -7747,6 +8043,10 @@ snapshots: dependencies: '@types/node': 22.19.15 + '@types/pdfkit@0.17.6': + dependencies: + '@types/node': 22.19.15 + '@types/react-dom@19.2.3(@types/react@19.2.14)': dependencies: '@types/react': 19.2.14 @@ -8283,6 +8583,8 @@ snapshots: dependencies: bare-path: 3.0.0 + base64-js@0.0.8: {} + base64-js@1.5.1: {} base64id@2.0.0: {} @@ -8363,6 +8665,10 @@ snapshots: browser-or-node@2.1.1: {} + browserify-zlib@0.2.0: + dependencies: + pako: 1.0.11 + browserslist@4.28.1: dependencies: baseline-browser-mapping: 2.10.8 @@ -9774,6 +10080,8 @@ snapshots: js-cookie@3.0.5: {} + js-md5@0.8.3: {} + js-tokens@10.0.0: {} js-tokens@4.0.0: {} @@ -9894,6 +10202,11 @@ snapshots: lilconfig@3.1.3: {} + linebreak@1.1.0: + 
dependencies: + base64-js: 0.0.8 + unicode-trie: 2.0.0 + lines-and-columns@1.2.4: {} linkify-it@5.0.0: @@ -10308,6 +10621,15 @@ snapshots: pako: 1.0.11 tslib: 1.14.1 + pdfkit@0.18.0: + dependencies: + '@noble/ciphers': 1.3.0 + '@noble/hashes': 1.8.0 + fontkit: 2.0.4 + js-md5: 0.8.3 + linebreak: 1.1.0 + png-js: 1.1.0 + peberminta@0.9.0: {} performance-now@2.1.0: {} @@ -10386,6 +10708,10 @@ snapshots: optionalDependencies: fsevents: 2.3.2 + png-js@1.1.0: + dependencies: + browserify-zlib: 0.2.0 + possible-typed-array-names@1.1.0: {} postcss-import@15.1.0(postcss@8.5.8): @@ -11175,6 +11501,37 @@ snapshots: '@img/sharp-win32-x64': 0.33.5 optional: true + sharp@0.34.5: + dependencies: + '@img/colour': 1.1.0 + detect-libc: 2.1.2 + semver: 7.7.4 + optionalDependencies: + '@img/sharp-darwin-arm64': 0.34.5 + '@img/sharp-darwin-x64': 0.34.5 + '@img/sharp-libvips-darwin-arm64': 1.2.4 + '@img/sharp-libvips-darwin-x64': 1.2.4 + '@img/sharp-libvips-linux-arm': 1.2.4 + '@img/sharp-libvips-linux-arm64': 1.2.4 + '@img/sharp-libvips-linux-ppc64': 1.2.4 + '@img/sharp-libvips-linux-riscv64': 1.2.4 + '@img/sharp-libvips-linux-s390x': 1.2.4 + '@img/sharp-libvips-linux-x64': 1.2.4 + '@img/sharp-libvips-linuxmusl-arm64': 1.2.4 + '@img/sharp-libvips-linuxmusl-x64': 1.2.4 + '@img/sharp-linux-arm': 0.34.5 + '@img/sharp-linux-arm64': 0.34.5 + '@img/sharp-linux-ppc64': 0.34.5 + '@img/sharp-linux-riscv64': 0.34.5 + '@img/sharp-linux-s390x': 0.34.5 + '@img/sharp-linux-x64': 0.34.5 + '@img/sharp-linuxmusl-arm64': 0.34.5 + '@img/sharp-linuxmusl-x64': 0.34.5 + '@img/sharp-wasm32': 0.34.5 + '@img/sharp-win32-arm64': 0.34.5 + '@img/sharp-win32-ia32': 0.34.5 + '@img/sharp-win32-x64': 0.34.5 + shebang-command@2.0.0: dependencies: shebang-regex: 3.0.0 diff --git a/scripts/dev-recommender-smoke.ts b/scripts/dev-recommender-smoke.ts new file mode 100644 index 0000000..b0b3584 --- /dev/null +++ b/scripts/dev-recommender-smoke.ts @@ -0,0 +1,52 @@ +/** + * Dev-only smoke check for the berth recommender. Resolves the first + * port-nimara interest (with desired dims set) and prints the top-N + * recommendations. 
+ * + * pnpm tsx scripts/dev-recommender-smoke.ts + */ +import 'dotenv/config'; +import { eq, isNotNull, and } from 'drizzle-orm'; + +import { db } from '@/lib/db'; +import { ports } from '@/lib/db/schema/ports'; +import { interests } from '@/lib/db/schema/interests'; +import { recommendBerths } from '@/lib/services/berth-recommender.service'; + +async function main() { + const [port] = await db + .select({ id: ports.id }) + .from(ports) + .where(eq(ports.slug, 'port-nimara')) + .limit(1); + if (!port) throw new Error('port-nimara not found'); + + const [interest] = await db + .select({ id: interests.id }) + .from(interests) + .where(and(eq(interests.portId, port.id), isNotNull(interests.desiredLengthFt))) + .limit(1); + if (!interest) throw new Error('No interest with desired dims set'); + + console.log(`> Recommending berths for interest ${interest.id} on port ${port.id}…`); + const recs = await recommendBerths({ + interestId: interest.id, + portId: port.id, + }); + + console.log(`> ${recs.length} recommendations:`); + for (const r of recs) { + console.log( + ` ${r.mooringNumber.padEnd(5)} tier=${r.tier} fit=${r.fitScore} ` + + `${r.lengthFt}×${r.widthFt}×${r.draftFt} ft buf=${r.sizeBufferPct}% ` + + `${r.reasons.dimensional}; ${r.reasons.pipeline}`, + ); + } +} + +main() + .then(() => process.exit(0)) + .catch((err) => { + console.error(err); + process.exit(1); + }); diff --git a/scripts/import-berths-from-nocodb.ts b/scripts/import-berths-from-nocodb.ts new file mode 100644 index 0000000..b4117e2 --- /dev/null +++ b/scripts/import-berths-from-nocodb.ts @@ -0,0 +1,409 @@ +/** + * Idempotent NocoDB Berths → CRM `berths` import. + * + * Re-running picks up NocoDB additions/edits without clobbering CRM-side + * overrides: rows where `updated_at > last_imported_at` are treated as + * human-edited and skipped (use `--force` to override). Map Data JSON + * is validated and upserted into `berth_map_data` as a separate step. + * + * Usage: + * pnpm tsx scripts/import-berths-from-nocodb.ts --dry-run [--port-slug port-nimara] + * pnpm tsx scripts/import-berths-from-nocodb.ts --apply [--port-slug port-nimara] + * pnpm tsx scripts/import-berths-from-nocodb.ts --apply --force + * pnpm tsx scripts/import-berths-from-nocodb.ts --apply --update-snapshot + * + * Edge cases mitigated (see plan §14.1): + * - Mooring collisions : unique (port_id, mooring_number) on the table. + * - Concurrent runs : pg_advisory_xact_lock on a stable key. + * - Numeric-with-units : parseDecimalWithUnit() strips trailing units. + * - Metric drift : NocoDB metric formula columns are ignored; + * metric values are recomputed from imperial. + * - Map Data shape : zod-validated; failures are skipped silently + * rather than aborting the whole import. + * - Status enum : NocoDB display strings → CRM snake_case. + * - NocoDB row deleted : reported as "orphaned in CRM"; not auto-deleted. 
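+ *
+ * Illustrative unit-stripping examples (inputs/outputs assumed, not test
+ * fixtures; the real helper lives in src/lib/services/berth-import):
+ *   parseDecimalWithUnit('65 ft')  -> 65
+ *   parseDecimalWithUnit('19.8m')  -> 19.8
+ *   parseDecimalWithUnit('')       -> null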
+ */
+
+import 'dotenv/config';
+import { eq, sql } from 'drizzle-orm';
+import { promises as fs } from 'node:fs';
+import path from 'node:path';
+
+import { db } from '@/lib/db';
+import { ports } from '@/lib/db/schema/ports';
+import { berths, berthMapData } from '@/lib/db/schema/berths';
+import { fetchAllRows, loadNocoDbConfig, NOCO_TABLES } from '@/lib/dedup/nocodb-source';
+import {
+  buildPlan,
+  mapRow,
+  type Action,
+  type ImportedBerth,
+  type PlanEntry,
+  type ExistingBerthRow,
+} from '@/lib/services/berth-import';
+
+// ─── CLI ────────────────────────────────────────────────────────────────────
+
+interface CliArgs {
+  dryRun: boolean;
+  apply: boolean;
+  portSlug: string;
+  force: boolean;
+  updateSnapshot: boolean;
+}
+
+function parseArgs(argv: string[]): CliArgs {
+  const args: CliArgs = {
+    dryRun: false,
+    apply: false,
+    portSlug: 'port-nimara',
+    force: false,
+    updateSnapshot: false,
+  };
+  for (let i = 0; i < argv.length; i += 1) {
+    const a = argv[i]!;
+    if (a === '--dry-run') args.dryRun = true;
+    else if (a === '--apply') args.apply = true;
+    else if (a === '--port-slug') args.portSlug = argv[++i] ?? 'port-nimara';
+    else if (a === '--force') args.force = true;
+    else if (a === '--update-snapshot') args.updateSnapshot = true;
+    else if (a === '-h' || a === '--help') {
+      printHelp();
+      process.exit(0);
+    } else {
+      console.error(`Unknown argument: ${a}`);
+      printHelp();
+      process.exit(1);
+    }
+  }
+  if (!args.dryRun && !args.apply) {
+    console.error('Must specify either --dry-run or --apply.');
+    printHelp();
+    process.exit(1);
+  }
+  return args;
+}
+
+function printHelp(): void {
+  console.log(`Usage:
+  pnpm tsx scripts/import-berths-from-nocodb.ts --dry-run [--port-slug <slug>]
+  pnpm tsx scripts/import-berths-from-nocodb.ts --apply [--port-slug <slug>] [--force] [--update-snapshot]
+
+Flags:
+  --dry-run           Read NocoDB + diff vs CRM. No writes.
+  --apply             Apply the plan to the DB.
+  --port-slug <slug>  Target port slug (default: port-nimara).
+  --force             Overwrite rows where CRM updated_at > last_imported_at.
+  --update-snapshot   Rewrite src/lib/db/seed-data/berths.json after apply.
+  -h, --help          Show this help.
+`);
+}
+
+// ─── Stable advisory lock key ───────────────────────────────────────────────
+// 64-bit BIGINT - first 4 bytes spell "BRTH" so it's grep-able in pg_locks.
+const BERTH_IMPORT_LOCK_KEY = 0x4252544800000001n;
+
+// ─── Apply ──────────────────────────────────────────────────────────────────
+
+interface ApplyResult {
+  inserted: number;
+  updated: number;
+  skipped: number;
+  mapDataWritten: number;
+  warnings: string[];
+}
+
+async function apply(
+  portId: string,
+  plan: PlanEntry[],
+  orphans: ExistingBerthRow[],
+  importedAt: Date,
+): Promise<ApplyResult> {
+  const result: ApplyResult = {
+    inserted: 0,
+    updated: 0,
+    skipped: 0,
+    mapDataWritten: 0,
+    warnings: [],
+  };
+  for (const orphan of orphans) {
+    result.warnings.push(
+      `Orphan: CRM has mooring="${orphan.mooringNumber}" but NocoDB no longer does (id=${orphan.id})`,
+    );
+  }
+
+  await db.transaction(async (tx) => {
+    // Stable lock so two simultaneous --apply runs serialize.
+    await tx.execute(sql`SELECT pg_advisory_xact_lock(${BERTH_IMPORT_LOCK_KEY})`);
+
+    for (const entry of plan) {
+      if (entry.action === 'skip-edited' || entry.action === 'noop') {
+        result.skipped += 1;
+        result.warnings.push(`Skipped ${entry.imported.mooringNumber}: ${entry.reason ??
'no-op'}`); + continue; + } + const i = entry.imported; + const n = i.numerics; + const baseValues = { + portId, + mooringNumber: i.mooringNumber, + area: i.area, + status: i.status, + lengthFt: n.lengthFt != null ? String(n.lengthFt) : null, + widthFt: n.widthFt != null ? String(n.widthFt) : null, + draftFt: n.draftFt != null ? String(n.draftFt) : null, + lengthM: n.lengthM != null ? String(n.lengthM) : null, + widthM: n.widthM != null ? String(n.widthM) : null, + draftM: n.draftM != null ? String(n.draftM) : null, + widthIsMinimum: i.widthIsMinimum, + nominalBoatSize: n.nominalBoatSize != null ? String(n.nominalBoatSize) : null, + nominalBoatSizeM: n.nominalBoatSizeM != null ? String(n.nominalBoatSizeM) : null, + waterDepth: n.waterDepth != null ? String(n.waterDepth) : null, + waterDepthM: n.waterDepthM != null ? String(n.waterDepthM) : null, + waterDepthIsMinimum: i.waterDepthIsMinimum, + sidePontoon: i.sidePontoon, + powerCapacity: n.powerCapacity != null ? String(n.powerCapacity) : null, + voltage: n.voltage != null ? String(n.voltage) : null, + mooringType: i.mooringType, + cleatType: i.cleatType, + cleatCapacity: i.cleatCapacity, + bollardType: i.bollardType, + bollardCapacity: i.bollardCapacity, + access: i.access, + price: n.price != null ? String(n.price) : null, + priceCurrency: 'USD' as const, + bowFacing: i.bowFacing, + berthApproved: i.berthApproved, + statusOverrideMode: i.statusOverrideMode, + lastImportedAt: importedAt, + updatedAt: importedAt, + }; + + let berthId: string; + if (entry.action === 'insert') { + const [inserted] = await tx + .insert(berths) + .values({ ...baseValues, tenureType: 'permanent' }) + .returning({ id: berths.id }); + berthId = inserted!.id; + result.inserted += 1; + } else { + await tx.update(berths).set(baseValues).where(eq(berths.id, entry.existing!.id)); + berthId = entry.existing!.id; + result.updated += 1; + } + + if (i.mapData) { + const mapValues = { + berthId, + svgPath: i.mapData.path ?? null, + x: i.mapData.x != null ? String(i.mapData.x) : null, + y: i.mapData.y != null ? String(i.mapData.y) : null, + transform: i.mapData.transform ?? null, + fontSize: i.mapData.fontSize != null ? String(i.mapData.fontSize) : null, + updatedAt: importedAt, + }; + await tx + .insert(berthMapData) + .values(mapValues) + .onConflictDoUpdate({ + target: berthMapData.berthId, + set: { + svgPath: mapValues.svgPath, + x: mapValues.x, + y: mapValues.y, + transform: mapValues.transform, + fontSize: mapValues.fontSize, + updatedAt: importedAt, + }, + }); + result.mapDataWritten += 1; + } + } + }); + return result; +} + +// ─── Snapshot writer (for seed-data refresh) ──────────────────────────────── + +async function writeSnapshot(imported: ImportedBerth[]): Promise { + // Ordering: idx 0..4 available (small), 5..9 under_offer (medium), + // 10..11 sold (large), then everything else by mooring number. The + // first 12 indexes feed `seed-data.ts` interest/reservation stubs. + const sortByLength = (a: ImportedBerth, b: ImportedBerth) => + (a.numerics.lengthFt ?? 0) - (b.numerics.lengthFt ?? 
0); + const available = imported + .filter((b) => b.status === 'available') + .sort(sortByLength) + .slice(0, 5); + const underOffer = imported + .filter((b) => b.status === 'under_offer') + .sort(sortByLength) + .slice(0, 5); + const sold = imported + .filter((b) => b.status === 'sold') + .sort((a, b) => -sortByLength(a, b)) + .slice(0, 2); + const featured = new Set([...available, ...underOffer, ...sold].map((b) => b.mooringNumber)); + const rest = imported + .filter((b) => !featured.has(b.mooringNumber)) + .sort((a, b) => a.mooringNumber.localeCompare(b.mooringNumber, 'en', { numeric: true })); + const ordered = [...available, ...underOffer, ...sold, ...rest]; + + const payload = ordered.map((b) => ({ + legacyId: b.legacyId, + mooringNumber: b.mooringNumber, + area: b.area, + status: b.status, + lengthFt: b.numerics.lengthFt, + widthFt: b.numerics.widthFt, + draftFt: b.numerics.draftFt, + lengthM: b.numerics.lengthM, + widthM: b.numerics.widthM, + draftM: b.numerics.draftM, + widthIsMinimum: b.widthIsMinimum, + nominalBoatSize: b.numerics.nominalBoatSize, + nominalBoatSizeM: b.numerics.nominalBoatSizeM, + waterDepth: b.numerics.waterDepth, + waterDepthM: b.numerics.waterDepthM, + waterDepthIsMinimum: b.waterDepthIsMinimum, + sidePontoon: b.sidePontoon, + powerCapacity: b.numerics.powerCapacity, + voltage: b.numerics.voltage, + mooringType: b.mooringType, + cleatType: b.cleatType, + cleatCapacity: b.cleatCapacity, + bollardType: b.bollardType, + bollardCapacity: b.bollardCapacity, + access: b.access, + price: b.numerics.price, + bowFacing: b.bowFacing, + berthApproved: b.berthApproved, + statusOverrideMode: b.statusOverrideMode, + })); + + const target = path.resolve(process.cwd(), 'src/lib/db/seed-data/berths.json'); + await fs.writeFile(target, JSON.stringify(payload, null, 2) + '\n', 'utf8'); + return target; +} + +// ─── Main ─────────────────────────────────────────────────────────────────── + +async function main(): Promise { + const args = parseArgs(process.argv.slice(2)); + const config = loadNocoDbConfig(); + + const [port] = await db + .select({ id: ports.id, slug: ports.slug }) + .from(ports) + .where(eq(ports.slug, args.portSlug)) + .limit(1); + if (!port) { + console.error(`No port found with slug "${args.portSlug}".`); + process.exit(1); + } + + console.log(`> Fetching NocoDB Berths…`); + const rows = await fetchAllRows(NOCO_TABLES.berths, config); + console.log(` fetched ${rows.length} rows from NocoDB`); + + const imported: ImportedBerth[] = []; + let skippedMalformed = 0; + for (const r of rows) { + const m = mapRow(r); + if (m) imported.push(m); + else skippedMalformed += 1; + } + if (skippedMalformed > 0) { + console.warn(` ${skippedMalformed} rows skipped (missing Mooring Number)`); + } + + // De-dup against any same-mooring twins surfacing from NocoDB + // (defensive — the Berths table is keyed on Mooring Number in NocoDB). 
+ const seen = new Set(); + const dedup: ImportedBerth[] = []; + for (const b of imported) { + if (seen.has(b.mooringNumber)) { + console.warn(` duplicate mooring "${b.mooringNumber}" in NocoDB — keeping first`); + continue; + } + seen.add(b.mooringNumber); + dedup.push(b); + } + + console.log(`> Reading current CRM berths for port "${port.slug}"…`); + const existingRows = await db + .select({ + id: berths.id, + mooringNumber: berths.mooringNumber, + updatedAt: berths.updatedAt, + lastImportedAt: berths.lastImportedAt, + }) + .from(berths) + .where(eq(berths.portId, port.id)); + console.log(` ${existingRows.length} existing rows`); + + const existingByMooring = new Map(existingRows.map((r) => [r.mooringNumber, r])); + const { plan, orphans } = buildPlan(dedup, existingByMooring, args.force); + + const counts = plan.reduce( + (acc, e) => { + acc[e.action] += 1; + return acc; + }, + { insert: 0, update: 0, 'skip-edited': 0, noop: 0 } as Record, + ); + + console.log(`> Plan:`); + console.log(` insert : ${counts.insert}`); + console.log(` update : ${counts.update}`); + console.log(` skip-edited : ${counts['skip-edited']}`); + console.log(` no-op : ${counts.noop}`); + console.log(` orphans (CRM): ${orphans.length}`); + + if (counts['skip-edited'] > 0) { + console.log(` ↳ Skipped (CRM-edited; pass --force to overwrite):`); + for (const e of plan.filter((p) => p.action === 'skip-edited').slice(0, 10)) { + console.log(` - ${e.imported.mooringNumber} ${e.reason}`); + } + if (counts['skip-edited'] > 10) console.log(` …and ${counts['skip-edited'] - 10} more`); + } + if (orphans.length > 0) { + console.log(` ↳ Orphans (in CRM but missing from NocoDB):`); + for (const o of orphans.slice(0, 10)) console.log(` - ${o.mooringNumber}`); + if (orphans.length > 10) console.log(` …and ${orphans.length - 10} more`); + } + + // Snapshot write is independent of DB writes — even in --dry-run mode + // a rep may want to refresh the seed JSON to capture the latest NocoDB + // shape without committing to the DB import. The original gate dropped + // this silently when --dry-run was passed; audit caught it. + if (args.updateSnapshot) { + const written = await writeSnapshot(dedup); + console.log(`> Wrote ${dedup.length} rows to ${path.relative(process.cwd(), written)}`); + } + + if (args.dryRun) { + console.log(`\n[dry-run] no DB writes performed.`); + return; + } + + console.log(`> Applying…`); + const result = await apply(port.id, plan, orphans, new Date()); + console.log(` inserted : ${result.inserted}`); + console.log(` updated : ${result.updated}`); + console.log(` skipped : ${result.skipped}`); + console.log(` map data writes : ${result.mapDataWritten}`); + if (result.warnings.length) { + console.log(` warnings :`); + for (const w of result.warnings.slice(0, 20)) console.log(` - ${w}`); + if (result.warnings.length > 20) console.log(` …and ${result.warnings.length - 20} more`); + } +} + +main() + .then(() => process.exit(0)) + .catch((err: unknown) => { + console.error(err); + process.exit(1); + }); diff --git a/scripts/load-berths-to-port-nimara.ts b/scripts/load-berths-to-port-nimara.ts deleted file mode 100644 index 6b70344..0000000 --- a/scripts/load-berths-to-port-nimara.ts +++ /dev/null @@ -1,126 +0,0 @@ -/** - * One-shot: load the 117-berth NocoDB snapshot into the port-nimara - * port, skipping any moorings that already exist. - * - * The original seed only seeded 12 hand-rolled berths into port-nimara - * (A-01..D-03), but the migration's interest rows reference moorings - * across A-01..E-18. 
This loads the full set so interest→berth links - * resolve cleanly on the next migration run. - */ -import 'dotenv/config'; -import { eq, and, sql, inArray } from 'drizzle-orm'; - -import { db } from '@/lib/db'; -import { ports } from '@/lib/db/schema/ports'; -import { berths } from '@/lib/db/schema/berths'; -import berthSnapshot from '@/lib/db/seed-data/berths.json'; - -interface SnapshotBerth { - mooringNumber: string; - area: string; - status: 'available' | 'under_offer' | 'sold'; - lengthFt: number | null; - widthFt: number | null; - draftFt: number | null; - lengthM: number | null; - widthM: number | null; - draftM: number | null; - widthIsMinimum: boolean; - nominalBoatSize: number | null; - nominalBoatSizeM: number | null; - waterDepth: number | null; - waterDepthM: number | null; - waterDepthIsMinimum: boolean; - sidePontoon: string | null; - powerCapacity: number | null; - voltage: number | null; - mooringType: string | null; - cleatType: string | null; - cleatCapacity: string | null; - bollardType: string | null; - bollardCapacity: string | null; - access: string | null; - price: number | null; - bowFacing: string | null; - berthApproved: boolean; - statusOverrideMode: string | null; -} - -async function main() { - const [port] = await db - .select({ id: ports.id }) - .from(ports) - .where(eq(ports.slug, 'port-nimara')) - .limit(1); - if (!port) throw new Error('port-nimara not found'); - - const snapshot = berthSnapshot as unknown as SnapshotBerth[]; - - // Existing moorings — skip these. - const existingRows = await db - .select({ mooringNumber: berths.mooringNumber }) - .from(berths) - .where(eq(berths.portId, port.id)); - const existingMoorings = new Set(existingRows.map((r) => r.mooringNumber)); - - const toInsert = snapshot.filter((b) => !existingMoorings.has(b.mooringNumber)); - console.log( - `Snapshot: ${snapshot.length} berths, existing in port-nimara: ${existingRows.length}, to insert: ${toInsert.length}`, - ); - - if (toInsert.length === 0) { - console.log('Nothing to do.'); - return; - } - - const inserted = await db - .insert(berths) - .values( - toInsert.map((b) => ({ - portId: port.id, - mooringNumber: b.mooringNumber, - area: b.area, - status: b.status, - lengthFt: b.lengthFt != null ? String(b.lengthFt) : null, - widthFt: b.widthFt != null ? String(b.widthFt) : null, - draftFt: b.draftFt != null ? String(b.draftFt) : null, - lengthM: b.lengthM != null ? String(b.lengthM) : null, - widthM: b.widthM != null ? String(b.widthM) : null, - draftM: b.draftM != null ? String(b.draftM) : null, - widthIsMinimum: b.widthIsMinimum, - nominalBoatSize: b.nominalBoatSize != null ? String(b.nominalBoatSize) : null, - nominalBoatSizeM: b.nominalBoatSizeM != null ? String(b.nominalBoatSizeM) : null, - waterDepth: b.waterDepth != null ? String(b.waterDepth) : null, - waterDepthM: b.waterDepthM != null ? String(b.waterDepthM) : null, - waterDepthIsMinimum: b.waterDepthIsMinimum, - sidePontoon: b.sidePontoon, - powerCapacity: b.powerCapacity != null ? String(b.powerCapacity) : null, - voltage: b.voltage != null ? String(b.voltage) : null, - mooringType: b.mooringType, - cleatType: b.cleatType, - cleatCapacity: b.cleatCapacity, - bollardType: b.bollardType, - bollardCapacity: b.bollardCapacity, - access: b.access, - price: b.price != null ? 
String(b.price) : null, - priceCurrency: 'USD', - bowFacing: b.bowFacing, - berthApproved: b.berthApproved, - statusOverrideMode: b.statusOverrideMode, - tenureType: 'permanent' as const, - })), - ) - .returning({ id: berths.id, mooringNumber: berths.mooringNumber }); - - console.log(`Inserted ${inserted.length} berths.`); - - // Suppress unused-import warning if eslint is strict. - void and; - void sql; - void inArray; -} - -main().catch((e) => { - console.error(e); - process.exit(1); -}); diff --git a/scripts/migrate-storage.ts b/scripts/migrate-storage.ts new file mode 100644 index 0000000..8b948f7 --- /dev/null +++ b/scripts/migrate-storage.ts @@ -0,0 +1,29 @@ +/** + * Storage backend migration CLI — see §4.7a + §14.9a of + * docs/berth-recommender-and-pdf-plan.md. + * + * pnpm tsx scripts/migrate-storage.ts --from s3 --to filesystem [--dry-run] + * pnpm tsx scripts/migrate-storage.ts --from filesystem --to s3 + * + * The actual migration logic lives in `src/lib/storage/migrate.ts` so the + * admin UI's "Switch backend" button can run the exact same code path. This + * file is a thin CLI wrapper. + */ + +import { logger } from '@/lib/logger'; +import { parseArgs, runMigration } from '@/lib/storage/migrate'; + +async function main(): Promise { + const args = parseArgs(process.argv.slice(2)); + logger.info({ args }, 'Starting storage migration'); + const result = await runMigration(args); + logger.info({ result }, 'Storage migration complete'); + console.log(JSON.stringify(result, null, 2)); + process.exit(0); +} + +main().catch((err) => { + logger.error({ err }, 'Storage migration failed'); + console.error(err); + process.exit(2); +}); diff --git a/scripts/smoke-test-redirect.ts b/scripts/smoke-test-redirect.ts index 1a374d1..a3dff87 100644 --- a/scripts/smoke-test-redirect.ts +++ b/scripts/smoke-test-redirect.ts @@ -32,14 +32,13 @@ async function main() { const nodemailer = await import('nodemailer'); const captured: Array<{ to: unknown; subject: unknown; from: unknown }> = []; const originalCreateTransport = nodemailer.default.createTransport; - // @ts-expect-error monkey-patch - nodemailer.default.createTransport = () => ({ + nodemailer.default.createTransport = (() => ({ // eslint-disable-next-line @typescript-eslint/no-explicit-any sendMail: async (msg: any) => { captured.push({ to: msg.to, subject: msg.subject, from: msg.from }); return { messageId: '', accepted: [msg.to], rejected: [] }; }, - }); + })) as unknown as typeof nodemailer.default.createTransport; // Now import sendEmail (gets the patched transporter). const { sendEmail } = await import('@/lib/email'); @@ -55,7 +54,6 @@ async function main() { await sendEmail(realClientEmail, realSubject, '
<p>Body unused for this smoke.</p>
'); // Restore the original transport (be a good citizen). - // @ts-expect-error monkey-patch nodemailer.default.createTransport = originalCreateTransport; console.log('[smoke] captured outbound message:'); diff --git a/src/app/(dashboard)/[portSlug]/admin/brochures/page.tsx b/src/app/(dashboard)/[portSlug]/admin/brochures/page.tsx new file mode 100644 index 0000000..03ef6f5 --- /dev/null +++ b/src/app/(dashboard)/[portSlug]/admin/brochures/page.tsx @@ -0,0 +1,21 @@ +import { PageHeader } from '@/components/shared/page-header'; +import { BrochuresAdminPanel } from '@/components/admin/brochures-admin-panel'; + +/** + * Per-port admin page for managing brochures (Phase 7 §5.8). + * + * Lists brochures, lets per-port admins upload new versions via direct-to- + * storage presigned URLs (so the 20MB+ file never traverses Next.js's + * body-size limit — see §11.1), and toggle the default flag. + */ +export default function BrochuresAdminPage() { + return ( +
+      <PageHeader title="Brochures" />
+      <BrochuresAdminPanel />
+    </div>
+ ); +} diff --git a/src/app/(dashboard)/[portSlug]/admin/email/page.tsx b/src/app/(dashboard)/[portSlug]/admin/email/page.tsx index 4477d4b..8362687 100644 --- a/src/app/(dashboard)/[portSlug]/admin/email/page.tsx +++ b/src/app/(dashboard)/[portSlug]/admin/email/page.tsx @@ -3,6 +3,7 @@ import { type SettingFieldDef, } from '@/components/admin/shared/settings-form-card'; import { PageHeader } from '@/components/shared/page-header'; +import { SalesEmailConfigCard } from '@/components/admin/sales-email-config-card'; const FIELDS: SettingFieldDef[] = [ { @@ -94,6 +95,7 @@ export default function EmailSettingsPage() { description="Optional per-port SMTP credentials. Leave blank to use the global env defaults." fields={FIELDS.slice(5)} /> + ); } diff --git a/src/app/(dashboard)/[portSlug]/admin/page.tsx b/src/app/(dashboard)/[portSlug]/admin/page.tsx index baa3591..97bd3ce 100644 --- a/src/app/(dashboard)/[portSlug]/admin/page.tsx +++ b/src/app/(dashboard)/[portSlug]/admin/page.tsx @@ -180,6 +180,13 @@ const GROUPS: AdminGroup[] = [ description: 'Database snapshots and on-demand exports.', icon: HardDrive, }, + { + href: 'storage', + label: 'Storage Backend', + description: + 'Choose between S3-compatible object store or local filesystem; migrate between them.', + icon: HardDrive, + }, ], }, { diff --git a/src/app/(dashboard)/[portSlug]/admin/storage/page.tsx b/src/app/(dashboard)/[portSlug]/admin/storage/page.tsx new file mode 100644 index 0000000..c0c2f53 --- /dev/null +++ b/src/app/(dashboard)/[portSlug]/admin/storage/page.tsx @@ -0,0 +1,7 @@ +import { StorageAdminPanel } from '@/components/admin/storage-admin-panel'; + +export const dynamic = 'force-dynamic'; + +export default function StorageAdminPage() { + return ; +} diff --git a/src/app/(dashboard)/[portSlug]/expenses/scan/page.tsx b/src/app/(dashboard)/[portSlug]/expenses/scan/page.tsx index 825c911..38608a4 100644 --- a/src/app/(dashboard)/[portSlug]/expenses/scan/page.tsx +++ b/src/app/(dashboard)/[portSlug]/expenses/scan/page.tsx @@ -3,7 +3,7 @@ import { useEffect, useRef, useState } from 'react'; import { useParams, useRouter } from 'next/navigation'; import { useMutation } from '@tanstack/react-query'; -import { Camera, Loader2, ScanLine, Upload } from 'lucide-react'; +import { Camera, Loader2, ScanLine, Upload, X } from 'lucide-react'; import { useMobileChrome } from '@/components/layout/mobile/mobile-layout-provider'; @@ -30,6 +30,11 @@ interface ScanResult { confidence: number; } +interface UploadedFileMeta { + id: string; + filename: string; +} + export default function ScanReceiptPage() { const params = useParams<{ portSlug: string }>(); const router = useRouter(); @@ -38,6 +43,13 @@ export default function ScanReceiptPage() { const cameraInputRef = useRef(null); const [scanResult, setScanResult] = useState(null); const [previewUrl, setPreviewUrl] = useState(null); + // After OCR succeeds we also upload the receipt to /api/v1/files/upload + // so the expense links to the actual image. The legacy scanner skipped + // this step and saved expenses without their receipt — which silently + // disqualified them from parent-company reimbursement (the warning the + // PDF export now surfaces). 
+  const [uploadedFile, setUploadedFile] = useState<UploadedFileMeta | null>(null);
+  const [pendingFile, setPendingFile] = useState<File | null>(null);
   const { setChrome } = useMobileChrome();
 
   useEffect(() => {
@@ -74,6 +86,29 @@
     },
   });
 
+  // Uploads the receipt image to /api/v1/files/upload (category=receipt)
+  // so the new expense row can link to it via receiptFileIds. Runs in
+  // parallel with the OCR scan so the rep can keep editing fields while
+  // the upload completes.
+  const uploadMutation = useMutation({
+    mutationFn: async (file: File): Promise<UploadedFileMeta> => {
+      const formData = new FormData();
+      formData.append('file', file);
+      formData.append('category', 'receipt');
+      const res = await fetch('/api/v1/files/upload', {
+        method: 'POST',
+        body: formData,
+        credentials: 'include',
+      });
+      if (!res.ok) throw new Error('Receipt upload failed');
+      const json = (await res.json()) as { data: { id: string; filename: string } };
+      return { id: json.data.id, filename: json.data.filename };
+    },
+    onSuccess: (meta) => {
+      setUploadedFile(meta);
+    },
+  });
+
   const saveMutation = useMutation({
     mutationFn: () =>
       apiFetch('/api/v1/expenses', {
@@ -85,6 +120,9 @@
           category: category || undefined,
           expenseDate: date ? new Date(date) : new Date(),
           paymentStatus: 'unpaid',
+          receiptFileIds: uploadedFile ? [uploadedFile.id] : undefined,
+          // The scanner path always has a receipt (we wouldn't have OCR'd
+          // it otherwise), so we never need the no-receipt flag here.
        },
      }),
     onSuccess: () => {
@@ -95,12 +133,32 @@
   function handleFileChange(e: React.ChangeEvent<HTMLInputElement>) {
     const file = e.target.files?.[0];
     if (!file) return;
-
+    setPendingFile(file);
     const url = URL.createObjectURL(file);
     setPreviewUrl(url);
+    // Kick off OCR scan + storage upload concurrently. The two are
+    // independent server calls and the rep is staring at the preview
+    // while both run.
     scanMutation.mutate(file);
+    uploadMutation.mutate(file);
   }
 
+  function handleClearReceipt() {
+    if (previewUrl) URL.revokeObjectURL(previewUrl);
+    setPreviewUrl(null);
+    setUploadedFile(null);
+    setPendingFile(null);
+    setScanResult(null);
+    // Reset in-flight mutations so a late onSuccess doesn't repopulate
+    // the form against an already-cleared UI (audit finding: stale
+    // receipt could land on the next Save).
+    scanMutation.reset();
+    uploadMutation.reset();
+    if (fileInputRef.current) fileInputRef.current.value = '';
+    if (cameraInputRef.current) cameraInputRef.current.value = '';
+  }
+
+  // `pendingFile` is currently write-only; `void` keeps the unused-var
+  // lint quiet without dropping the state.
+  void pendingFile;
+
+  return (
@@ -119,18 +177,45 @@ export default function ScanReceiptPage() { {previewUrl ? ( -
fileInputRef.current?.click()} - > - Receipt preview +
+
+ Receipt preview + +
+
+ {uploadMutation.isPending && ( + + Uploading receipt… + + )} + {uploadedFile && ( + + Receipt uploaded ({uploadedFile.filename}) + + )} + {uploadMutation.isError && ( + + Receipt upload failed — save will still create the expense without an image. + + )} +
) : (
+ {/* Camera button — available on mobile devices that surface the + built-in capture flow when an `image/*` input has the + `capture` attribute. Hidden on desktop where it's a no-op. */} + {/* File picker — works on every platform. Phrased so the copy + fits both mobile (library/files) and desktop (drag and drop). */}

- JPEG, PNG, WebP up to 10MB + JPEG, PNG, HEIC, WebP up to 10 MB +

+

+ Have many receipts?{' '} + + Bulk upload → +

)} + {/* `image/*` is the broadest accept — includes HEIC on iOS, + JPEG/PNG/WebP everywhere. The capture attribute on the second + input invokes the native camera flow on mobile. */} @@ -264,10 +363,20 @@ export default function ScanReceiptPage() {
diff --git a/src/app/api/public/berths/[mooringNumber]/route.ts b/src/app/api/public/berths/[mooringNumber]/route.ts new file mode 100644 index 0000000..1b6b1f2 --- /dev/null +++ b/src/app/api/public/berths/[mooringNumber]/route.ts @@ -0,0 +1,108 @@ +import { NextResponse } from 'next/server'; +import { and, eq, isNull } from 'drizzle-orm'; + +import { db } from '@/lib/db'; +import { ports } from '@/lib/db/schema/ports'; +import { berths, berthMapData } from '@/lib/db/schema/berths'; +import { interestBerths, interests } from '@/lib/db/schema/interests'; +import { logger } from '@/lib/logger'; +import { toPublicBerth } from '@/lib/services/public-berths'; + +/** + * GET /api/public/berths/[mooringNumber] + * + * Single-berth lookup for the public website's `/berths/[number]` + * page. Mooring numbers are matched against the canonical bare form + * ("A1", "B12") - Phase 0 normalized the entire CRM dataset. + */ + +// Hard-coded allowlist for the public read-only feed. Adding a port here +// is a deliberate decision (not silent enumeration via ?portSlug=), so a +// future private tenant can't be exposed by accident. +const PUBLIC_PORT_SLUGS = new Set(['port-nimara']); +const DEFAULT_PUBLIC_PORT_SLUG = 'port-nimara'; +const RESPONSE_HEADERS = { + 'cache-control': 'public, s-maxage=300, stale-while-revalidate=60', + 'content-type': 'application/json; charset=utf-8', +}; + +const MOORING_PATTERN = /^[A-Z]+\d+$/; + +export async function GET( + request: Request, + ctx: { params: Promise<{ mooringNumber: string }> }, +): Promise { + const { mooringNumber } = await ctx.params; + const url = new URL(request.url); + const requestedSlug = url.searchParams.get('portSlug') ?? DEFAULT_PUBLIC_PORT_SLUG; + if (!PUBLIC_PORT_SLUGS.has(requestedSlug)) { + return NextResponse.json( + { error: 'port is not part of the public berths feed', portSlug: requestedSlug }, + { status: 404, headers: { 'cache-control': 'no-store' } }, + ); + } + const portSlug = requestedSlug; + + // Reject obviously malformed mooring numbers up front so cache poisoning + // / random-URL probing returns 400 rather than 404 (saves a DB hit). + if (!MOORING_PATTERN.test(mooringNumber)) { + return NextResponse.json( + { error: 'invalid mooring number', mooringNumber }, + { status: 400, headers: { 'cache-control': 'no-store' } }, + ); + } + + const [port] = await db + .select({ id: ports.id }) + .from(ports) + .where(eq(ports.slug, portSlug)) + .limit(1); + if (!port) { + return NextResponse.json( + { error: 'port not found', portSlug }, + { status: 404, headers: { 'cache-control': 'no-store' } }, + ); + } + + const [berth] = await db + .select() + .from(berths) + .where(and(eq(berths.portId, port.id), eq(berths.mooringNumber, mooringNumber))) + .limit(1); + + if (!berth) { + return NextResponse.json( + { error: 'berth not found', mooringNumber }, + { status: 404, headers: { 'cache-control': 'no-store' } }, + ); + } + + const [mapData, specificInterestRows] = await Promise.all([ + db.select().from(berthMapData).where(eq(berthMapData.berthId, berth.id)).limit(1), + db + .select({ berthId: interestBerths.berthId }) + .from(interestBerths) + .innerJoin(interests, eq(interests.id, interestBerths.interestId)) + .where( + and( + eq(interestBerths.berthId, berth.id), + eq(interestBerths.isSpecificInterest, true), + isNull(interests.archivedAt), + // Closed deals (won/lost/cancelled) don't promote to "Under + // Offer" - won flows through berths.status='sold' handled in + // derivePublicStatus; lost/cancelled means back on the market. 
+          isNull(interests.outcome),
+        ),
+      )
+      .limit(1),
+  ]);
+
+  const out = toPublicBerth(berth, mapData[0] ?? null, specificInterestRows.length > 0);
+
+  if (out.Status !== 'Available' && out.Status !== 'Under Offer' && out.Status !== 'Sold') {
+    logger.error({ berthId: berth.id, status: out.Status }, 'Public berth status out of range');
+    return NextResponse.json({ error: 'internal' }, { status: 500 });
+  }
+
+  return new Response(JSON.stringify(out), { headers: RESPONSE_HEADERS, status: 200 });
+}
diff --git a/src/app/api/public/berths/route.ts b/src/app/api/public/berths/route.ts
new file mode 100644
index 0000000..8c8f27c
--- /dev/null
+++ b/src/app/api/public/berths/route.ts
@@ -0,0 +1,157 @@
+import { NextResponse } from 'next/server';
+import { and, eq, inArray, isNull } from 'drizzle-orm';
+
+import { db } from '@/lib/db';
+import { ports } from '@/lib/db/schema/ports';
+import { berths, berthMapData } from '@/lib/db/schema/berths';
+import { interestBerths, interests } from '@/lib/db/schema/interests';
+import { logger } from '@/lib/logger';
+import { toPublicBerth, type PublicBerth } from '@/lib/services/public-berths';
+
+/**
+ * GET /api/public/berths
+ *
+ * Public-website data feed. Returns the full berth list for the public-
+ * facing port (default: port-nimara) in the same JSON shape NocoDB
+ * returned, so the website's existing `getBerths()` swap is a one-line
+ * URL change (plan §4.5 + §7.3).
+ *
+ * Auth: none. The endpoint is read-only and exposes only the explicit
+ * field allowlist defined in `toPublicBerth`.
+ *
+ * Caching: `s-maxage=300, stale-while-revalidate=60` matches the
+ * website's existing 5-minute TTL behaviour against NocoDB. Edge/CDN
+ * caches honour these headers; the Next.js fetch cache also picks
+ * them up.
+ */
+
+// Hard-coded allowlist for the public read-only feed. Adding a port here
+// is a deliberate decision (not silent enumeration via ?portSlug=), so a
+// future private tenant can't be exposed by accident.
+const PUBLIC_PORT_SLUGS = new Set(['port-nimara']);
+const DEFAULT_PUBLIC_PORT_SLUG = 'port-nimara';
+
+const RESPONSE_HEADERS = {
+  'cache-control': 'public, s-maxage=300, stale-while-revalidate=60',
+  'content-type': 'application/json; charset=utf-8',
+};
+
+interface ListResponse {
+  list: PublicBerth[];
+  pageInfo: {
+    totalRows: number;
+    page: 1;
+    pageSize: number;
+    isFirstPage: true;
+    isLastPage: true;
+  };
+}
+
+export async function GET(request: Request): Promise<Response> {
+  const url = new URL(request.url);
+  const requestedSlug = url.searchParams.get('portSlug') ?? DEFAULT_PUBLIC_PORT_SLUG;
+  if (!PUBLIC_PORT_SLUGS.has(requestedSlug)) {
+    return NextResponse.json(
+      { error: 'port is not part of the public berths feed', portSlug: requestedSlug },
+      { status: 404, headers: { 'cache-control': 'no-store' } },
+    );
+  }
+  const portSlug = requestedSlug;
+
+  const [port] = await db
+    .select({ id: ports.id })
+    .from(ports)
+    .where(eq(ports.slug, portSlug))
+    .limit(1);
+  if (!port) {
+    return NextResponse.json(
+      { error: 'port not found', portSlug },
+      { status: 404, headers: { 'cache-control': 'no-store' } },
+    );
+  }
+
+  // 1. All berths for the port. §4.5 calls for "filters out berths
+  //    archived in CRM", but the current schema has no archived flag on
+  //    berths, so that filter is a no-op today; a future archived_at
+  //    column plugs in here.
+ const berthRows = await db.select().from(berths).where(eq(berths.portId, port.id)); + + if (berthRows.length === 0) { + return jsonResponse({ list: [], pageInfo: emptyPageInfo() }); + } + + const berthIds = berthRows.map((b) => b.id); + + // 2. Bulk-fetch map_data + the "has specific-interest link" flag. + const [mapRows, specificInterestRows] = await Promise.all([ + db.select().from(berthMapData).where(inArray(berthMapData.berthId, berthIds)), + db + .selectDistinct({ berthId: interestBerths.berthId }) + .from(interestBerths) + .innerJoin(interests, eq(interests.id, interestBerths.interestId)) + .where( + and( + inArray(interestBerths.berthId, berthIds), + eq(interestBerths.isSpecificInterest, true), + isNull(interests.archivedAt), + // Don't promote a berth to "Under Offer" when the only specific- + // interest link is a closed deal. `won` flips happen via + // berths.status='sold' (handled in derivePublicStatus). Lost/ + // cancelled outcomes mean the berth is back on the market. + isNull(interests.outcome), + ), + ), + ]); + + const mapByBerth = new Map(mapRows.map((m) => [m.berthId, m])); + const specificInterestSet = new Set(specificInterestRows.map((r) => r.berthId)); + + const list = berthRows.map((b) => + toPublicBerth(b, mapByBerth.get(b.id) ?? null, specificInterestSet.has(b.id)), + ); + + // Validate the response enum before returning - any unknown status + // value would hit a 500 (per §14.8) rather than silently shipping + // invalid data downstream. + for (const row of list) { + if (row.Status !== 'Available' && row.Status !== 'Under Offer' && row.Status !== 'Sold') { + // Log just the identifying fields - never the full berth row, which + // includes price + amenity columns that don't belong in error logs. + logger.error( + { berthId: row.Id, mooringNumber: row['Mooring Number'], status: row.Status }, + 'Public berth status out of range', + ); + return NextResponse.json( + { error: 'internal', detail: 'berth status enum drift' }, + { status: 500 }, + ); + } + } + + return jsonResponse({ + list, + pageInfo: { + totalRows: list.length, + page: 1, + pageSize: list.length, + isFirstPage: true, + isLastPage: true, + }, + }); +} + +function jsonResponse(body: ListResponse): Response { + return new Response(JSON.stringify(body), { headers: RESPONSE_HEADERS, status: 200 }); +} + +function emptyPageInfo() { + return { + totalRows: 0, + page: 1 as const, + pageSize: 0, + isFirstPage: true as const, + isLastPage: true as const, + }; +} diff --git a/src/app/api/public/health/route.ts b/src/app/api/public/health/route.ts new file mode 100644 index 0000000..6da7035 --- /dev/null +++ b/src/app/api/public/health/route.ts @@ -0,0 +1,25 @@ +import { NextResponse } from 'next/server'; + +import { env } from '@/lib/env'; + +/** + * GET /api/public/health + * + * Public-facing health probe. Used by the marketing-website server on + * startup to verify it's pointed at a CRM matching its own deployment + * env (plan §14.8 critical: prevent staging-website-talking-to-prod-CRM). + * + * Returns the CRM's `NODE_ENV` and `APP_URL` so the website can do a + * strict equality check before serving any request. 
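+ *
+ * Illustrative website-side guard (names assumed; the website repo is not
+ * part of this codebase):
+ *
+ *   const res = await fetch(`${process.env.CRM_PUBLIC_URL}/api/public/health`);
+ *   const { env } = (await res.json()) as { env: string };
+ *   if (env !== process.env.NODE_ENV) {
+ *     throw new Error(`CRM env mismatch: ${env} !== ${process.env.NODE_ENV}`);
+ *   }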
+ */ +export function GET(): Response { + return NextResponse.json( + { + status: 'ok', + env: env.NODE_ENV, + appUrl: env.APP_URL, + timestamp: new Date().toISOString(), + }, + { headers: { 'cache-control': 'no-store' } }, + ); +} diff --git a/src/app/api/public/interests/route.ts b/src/app/api/public/interests/route.ts index 4d80889..479babd 100644 --- a/src/app/api/public/interests/route.ts +++ b/src/app/api/public/interests/route.ts @@ -4,7 +4,7 @@ import type { z } from 'zod'; import { db } from '@/lib/db'; import { withTransaction } from '@/lib/db/utils'; -import { interests } from '@/lib/db/schema/interests'; +import { interests, interestBerths } from '@/lib/db/schema/interests'; import { clients, clientContacts, clientAddresses } from '@/lib/db/schema/clients'; import { berths } from '@/lib/db/schema/berths'; import { ports } from '@/lib/db/schema/ports'; @@ -213,13 +213,17 @@ export async function POST(req: NextRequest) { } } - // 5. Create interest with yachtId wired up. + // 5. Create interest with yachtId wired up. The legacy + // interests.berth_id column has been replaced by the + // interest_berths junction (plan §3.4); when the public form + // resolves to a known berth we materialise it as a primary, + // specific-interest junction row in the same transaction so it + // rolls back together with the parent interest insert. const [newInterest] = await tx .insert(interests) .values({ portId, clientId, - berthId, yachtId, source: 'website', pipelineStage: 'open', @@ -227,6 +231,16 @@ export async function POST(req: NextRequest) { }) .returning(); + if (berthId) { + await tx.insert(interestBerths).values({ + interestId: newInterest!.id, + berthId, + isPrimary: true, + isSpecificInterest: true, + isInEoiBundle: false, + }); + } + return { interestId: newInterest!.id, clientId, diff --git a/src/app/api/storage/[token]/route.ts b/src/app/api/storage/[token]/route.ts new file mode 100644 index 0000000..24a8119 --- /dev/null +++ b/src/app/api/storage/[token]/route.ts @@ -0,0 +1,236 @@ +/** + * Filesystem-backend download proxy. + * + * The `FilesystemBackend.presignDownload(...)` returns a CRM-internal URL of + * the form `/api/storage/`. This route verifies the HMAC, + * checks expiry, enforces single-use via a short Redis cache, then streams + * the file out with explicit `Content-Type` + `Content-Disposition`. + * + * §14.9a mitigations exercised here: + * - HMAC verification (timingSafeEqual via filesystem.verifyProxyToken) + * - expiry check (token includes `e` epoch seconds) + * - single-use replay protection via short Redis SET-NX + * - Node runtime only (no edge); explicit headers so Next.js doesn't try to + * process the bytes (no image optimization, no streaming transforms) + */ + +import { createReadStream } from 'node:fs'; +import * as fs from 'node:fs/promises'; +import { Readable } from 'node:stream'; + +import { NextRequest, NextResponse } from 'next/server'; + +import { MAX_FILE_SIZE } from '@/lib/constants/file-validation'; +import { logger } from '@/lib/logger'; +import { redis } from '@/lib/redis'; +import { FilesystemBackend, getStorageBackend } from '@/lib/storage'; +import { verifyProxyToken } from '@/lib/storage/filesystem'; +import { isPdfMagic } from '@/lib/services/berth-pdf-parser'; + +export const runtime = 'nodejs'; +export const dynamic = 'force-dynamic'; + +// Replay-protection TTL must outlive the token itself, otherwise the +// dedup key expires and the same token can be redeemed twice. 
We pin it
+// to the token's own expiry (clamped to a 25-day ceiling so a forged
+// far-future token can't pollute Redis indefinitely). Send-out emails
+// mint 24-hour tokens so the typical TTL is 24h + a small buffer.
+const REPLAY_TTL_FLOOR_SECONDS = 60; // never below 60s (post-expiry tail).
+const REPLAY_TTL_CEILING_SECONDS = 25 * 24 * 60 * 60; // 25 days.
+
+export async function GET(
+  _req: NextRequest,
+  ctx: { params: Promise<{ token: string }> },
+): Promise<Response> {
+  const { token } = await ctx.params;
+
+  const backend = await getStorageBackend();
+  if (!(backend instanceof FilesystemBackend)) {
+    return NextResponse.json(
+      { error: 'Storage proxy is only available in filesystem mode' },
+      { status: 404 },
+    );
+  }
+
+  const result = verifyProxyToken(token, backend.getHmacSecret());
+  if (!result.ok) {
+    logger.warn({ reason: result.reason }, 'Storage proxy token rejected');
+    return NextResponse.json({ error: 'Invalid or expired token' }, { status: 403 });
+  }
+  const { payload } = result;
+
+  // Single-use enforcement. SET NX with a TTL pinned to the token's own
+  // expiry so the dedup window never closes before the token does. The
+  // body half of the token is enough as the dedup key (including the
+  // signature would also work, but a reused token has the same body).
+  const replayKey = `storage:proxy:seen:${token.split('.')[0]}`;
+  const remainingSeconds = Math.max(
+    REPLAY_TTL_FLOOR_SECONDS,
+    Math.min(REPLAY_TTL_CEILING_SECONDS, payload.e - Math.floor(Date.now() / 1000) + 60),
+  );
+  const setOk = await redis.set(replayKey, '1', 'EX', remainingSeconds, 'NX');
+  if (setOk !== 'OK') {
+    logger.warn({ key: payload.k }, 'Storage proxy token replay rejected');
+    return NextResponse.json({ error: 'Token already used' }, { status: 403 });
+  }
+
+  let absolutePath: string;
+  try {
+    absolutePath = backend.resolveKeyForProxy(payload.k);
+  } catch (err) {
+    logger.warn({ err, key: payload.k }, 'Storage proxy key resolution failed');
+    return NextResponse.json({ error: 'Invalid key' }, { status: 400 });
+  }
+
+  let size: number;
+  try {
+    const stat = await fs.stat(absolutePath);
+    if (!stat.isFile()) {
+      return NextResponse.json({ error: 'Not found' }, { status: 404 });
+    }
+    size = stat.size;
+  } catch (err) {
+    const code = (err as NodeJS.ErrnoException).code;
+    if (code === 'ENOENT') {
+      return NextResponse.json({ error: 'Not found' }, { status: 404 });
+    }
+    throw err;
+  }
+
+  // Convert the Node Readable into a Web ReadableStream for NextResponse.
+  const nodeStream = createReadStream(absolutePath);
+  const webStream = Readable.toWeb(nodeStream) as unknown as ReadableStream;
+
+  const headers = new Headers();
+  headers.set('Content-Type', payload.c ?? 'application/octet-stream');
+  headers.set('Content-Length', String(size));
+  if (payload.f) {
+    // RFC 5987 — quote the filename and provide a UTF-8 fallback.
+    const safe = payload.f.replace(/"/g, '');
+    headers.set(
+      'Content-Disposition',
+      `attachment; filename="${safe}"; filename*=UTF-8''${encodeURIComponent(payload.f)}`,
+    );
+  }
+  headers.set('Cache-Control', 'private, no-store');
+  headers.set('X-Content-Type-Options', 'nosniff');
+
+  return new NextResponse(webStream, { status: 200, headers });
+}
+
+/**
+ * Filesystem-backend upload proxy. The presigned URL minted by
+ * `FilesystemBackend.presignUpload` points here. Without this handler,
+ * browser-driven berth-PDF / brochure uploads would 405 in filesystem
+ * deployments (the GET handler above only covers the download half of
+ * the proxy).
+ *
+ * Same token-verify + single-use replay protection as GET, plus:
+ * - Hard size cap (rejects oversized bodies before any disk I/O).
+ * - Magic-byte check when the issuer declared content-type=application/pdf
+ *   (matches the §14.6 §6c/§7c invariant: every upload path verifies
+ *   bytes server-side, not just at the client).
+ */
+export async function PUT(
+  req: NextRequest,
+  ctx: { params: Promise<{ token: string }> },
+): Promise<Response> {
+  const { token } = await ctx.params;
+
+  const backend = await getStorageBackend();
+  if (!(backend instanceof FilesystemBackend)) {
+    return NextResponse.json(
+      { error: 'Storage proxy is only available in filesystem mode' },
+      { status: 404 },
+    );
+  }
+
+  const result = verifyProxyToken(token, backend.getHmacSecret());
+  if (!result.ok) {
+    logger.warn({ reason: result.reason }, 'Storage proxy upload token rejected');
+    return NextResponse.json({ error: 'Invalid or expired token' }, { status: 403 });
+  }
+  const { payload } = result;
+
+  // Separate replay namespace from GET so a token can validly serve one
+  // upload AND one download (the issuer only mints the second), but a
+  // PUT cannot be replayed against itself.
+  const replayKey = `storage:proxy:put:${token.split('.')[0]}`;
+  const remainingSeconds = Math.max(
+    REPLAY_TTL_FLOOR_SECONDS,
+    Math.min(REPLAY_TTL_CEILING_SECONDS, payload.e - Math.floor(Date.now() / 1000) + 60),
+  );
+  const setOk = await redis.set(replayKey, '1', 'EX', remainingSeconds, 'NX');
+  if (setOk !== 'OK') {
+    logger.warn({ key: payload.k }, 'Storage proxy upload token replay rejected');
+    return NextResponse.json({ error: 'Token already used' }, { status: 403 });
+  }
+
+  // Pre-flight size check via Content-Length so a malicious caller can't
+  // exhaust disk by streaming hundreds of MB before we look at the body.
+  const contentLengthHeader = req.headers.get('content-length');
+  const contentLength = contentLengthHeader ? Number(contentLengthHeader) : NaN;
+  if (Number.isFinite(contentLength) && contentLength > MAX_FILE_SIZE) {
+    return NextResponse.json(
+      { error: `File exceeds ${MAX_FILE_SIZE} byte cap (Content-Length: ${contentLength})` },
+      { status: 413 },
+    );
+  }
+
+  if (!req.body) {
+    return NextResponse.json({ error: 'Empty body' }, { status: 400 });
+  }
+
+  // Read the body into a buffer with a hard cap. Filesystem deployments are
+  // small-tenant (single-node only — see FilesystemBackend boot guard), so a
+  // 50 MB ceiling fits comfortably in heap; no streaming needed.
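The `isPdfMagic` gate applied to the buffered body a few lines below is imported from the parser module; its implementation is not shown in this diff. A minimal sketch of what such a check typically looks like (an assumption, the real function may do more):

```ts
// Sketch: a PDF must begin with the five ASCII bytes "%PDF-". Anything
// shorter, or with a different prefix, fails the gate before parsing runs.
const PDF_MAGIC = Buffer.from('%PDF-', 'ascii');

function isPdfMagicSketch(buf: Buffer): boolean {
  return buf.length >= PDF_MAGIC.length && buf.subarray(0, PDF_MAGIC.length).equals(PDF_MAGIC);
}
```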
+ let buffer: Buffer; + try { + const chunks: Buffer[] = []; + let total = 0; + const reader = req.body.getReader(); + while (true) { + const { done, value } = await reader.read(); + if (done) break; + total += value.byteLength; + if (total > MAX_FILE_SIZE) { + try { + await reader.cancel(); + } catch { + /* ignore */ + } + return NextResponse.json( + { error: `File exceeds ${MAX_FILE_SIZE} byte cap` }, + { status: 413 }, + ); + } + chunks.push(Buffer.from(value)); + } + buffer = Buffer.concat(chunks); + } catch (err) { + logger.warn({ err, key: payload.k }, 'Storage proxy upload read failed'); + return NextResponse.json({ error: 'Upload read failed' }, { status: 400 }); + } + + // Magic-byte gate: when the token was minted with `c=application/pdf` + // (the only consumer today — berth PDFs + brochures), refuse anything + // that isn't actually a PDF. Mirrors the post-upload check in + // berth-pdf.service.ts so the two paths behave identically. + if (payload.c === 'application/pdf' && !isPdfMagic(buffer)) { + return NextResponse.json( + { error: 'Uploaded file failed PDF magic-byte check (does not start with %PDF-).' }, + { status: 400 }, + ); + } + + try { + await backend.put(payload.k, buffer, { + contentType: payload.c ?? 'application/octet-stream', + }); + } catch (err) { + logger.error({ err, key: payload.k }, 'Storage proxy upload write failed'); + return NextResponse.json({ error: 'Upload write failed' }, { status: 500 }); + } + + return NextResponse.json({ ok: true, key: payload.k, sizeBytes: buffer.length }, { status: 200 }); +} diff --git a/src/app/api/v1/admin/brochures/[id]/route.ts b/src/app/api/v1/admin/brochures/[id]/route.ts new file mode 100644 index 0000000..a650d10 --- /dev/null +++ b/src/app/api/v1/admin/brochures/[id]/route.ts @@ -0,0 +1,44 @@ +import { NextResponse } from 'next/server'; + +import { withAuth, withPermission } from '@/lib/api/helpers'; +import { parseBody } from '@/lib/api/route-helpers'; +import { errorResponse } from '@/lib/errors'; +import { archiveBrochure, getBrochure, updateBrochure } from '@/lib/services/brochures.service'; +import { updateBrochureSchema } from '@/lib/validators/brochures'; + +export const GET = withAuth( + withPermission('admin', 'manage_settings', async (_req, ctx, params) => { + try { + const id = params.id!; + const data = await getBrochure(ctx.portId, id); + return NextResponse.json({ data }); + } catch (error) { + return errorResponse(error); + } + }), +); + +export const PATCH = withAuth( + withPermission('admin', 'manage_settings', async (req, ctx, params) => { + try { + const id = params.id!; + const input = await parseBody(req, updateBrochureSchema); + const data = await updateBrochure(ctx.portId, id, input); + return NextResponse.json({ data }); + } catch (error) { + return errorResponse(error); + } + }), +); + +export const DELETE = withAuth( + withPermission('admin', 'manage_settings', async (_req, ctx, params) => { + try { + const id = params.id!; + await archiveBrochure(ctx.portId, id); + return NextResponse.json({ success: true }); + } catch (error) { + return errorResponse(error); + } + }), +); diff --git a/src/app/api/v1/admin/brochures/[id]/versions/route.ts b/src/app/api/v1/admin/brochures/[id]/versions/route.ts new file mode 100644 index 0000000..a58224d --- /dev/null +++ b/src/app/api/v1/admin/brochures/[id]/versions/route.ts @@ -0,0 +1,68 @@ +import { NextResponse } from 'next/server'; + +import { withAuth, withPermission } from '@/lib/api/helpers'; +import { parseBody } from '@/lib/api/route-helpers'; 
+import { errorResponse } from '@/lib/errors'; +import { + generateBrochureStorageKey, + registerBrochureVersion, +} from '@/lib/services/brochures.service'; +import { registerBrochureVersionSchema } from '@/lib/validators/brochures'; + +/** + * Two-step upload (per §11.1): + * 1. GET (no body) — server returns a fresh storage key + presigned URL. + * 2. POST (metadata) — after the browser PUTs to the URL, register the + * version row server-side. + * + * Direct-to-storage uploads bypass Next.js's body-size limit; the server + * never holds the 20MB+ payload in memory. + */ +import { getStorageBackend } from '@/lib/storage'; +import { getSalesContentConfig } from '@/lib/services/sales-email-config.service'; + +export const GET = withAuth( + withPermission('admin', 'manage_settings', async (_req, ctx, params) => { + try { + const id = params.id!; + const content = await getSalesContentConfig(ctx.portId); + const storageKey = await generateBrochureStorageKey(ctx.portId, id); + const storage = await getStorageBackend(); + const { url } = await storage.presignUpload(storageKey, { + expirySeconds: 900, + contentType: 'application/pdf', + }); + return NextResponse.json({ + data: { + storageKey, + uploadUrl: url, + method: 'PUT', + maxBytes: content.brochureMaxUploadMb * 1024 * 1024, + }, + }); + } catch (error) { + return errorResponse(error); + } + }), +); + +export const POST = withAuth( + withPermission('admin', 'manage_settings', async (req, ctx, params) => { + try { + const id = params.id!; + const input = await parseBody(req, registerBrochureVersionSchema); + const data = await registerBrochureVersion({ + portId: ctx.portId, + brochureId: id, + storageKey: input.storageKey, + fileName: input.fileName, + fileSizeBytes: input.fileSizeBytes, + contentSha256: input.contentSha256, + uploadedBy: ctx.userId, + }); + return NextResponse.json({ data }, { status: 201 }); + } catch (error) { + return errorResponse(error); + } + }), +); diff --git a/src/app/api/v1/admin/brochures/route.ts b/src/app/api/v1/admin/brochures/route.ts new file mode 100644 index 0000000..53c3562 --- /dev/null +++ b/src/app/api/v1/admin/brochures/route.ts @@ -0,0 +1,36 @@ +import { NextResponse } from 'next/server'; + +import { withAuth, withPermission } from '@/lib/api/helpers'; +import { parseBody } from '@/lib/api/route-helpers'; +import { errorResponse } from '@/lib/errors'; +import { createBrochure, listBrochures } from '@/lib/services/brochures.service'; +import { createBrochureSchema } from '@/lib/validators/brochures'; + +export const GET = withAuth( + withPermission('admin', 'manage_settings', async (_req, ctx) => { + try { + const data = await listBrochures(ctx.portId, { includeArchived: true }); + return NextResponse.json({ data }); + } catch (error) { + return errorResponse(error); + } + }), +); + +export const POST = withAuth( + withPermission('admin', 'manage_settings', async (req, ctx) => { + try { + const input = await parseBody(req, createBrochureSchema); + const data = await createBrochure({ + portId: ctx.portId, + label: input.label, + description: input.description ?? 
null, + isDefault: input.isDefault, + createdBy: ctx.userId, + }); + return NextResponse.json({ data }, { status: 201 }); + } catch (error) { + return errorResponse(error); + } + }), +); diff --git a/src/app/api/v1/admin/email/sales-config/route.ts b/src/app/api/v1/admin/email/sales-config/route.ts new file mode 100644 index 0000000..4ddc45b --- /dev/null +++ b/src/app/api/v1/admin/email/sales-config/route.ts @@ -0,0 +1,74 @@ +import { NextResponse } from 'next/server'; + +import { withAuth, withPermission } from '@/lib/api/helpers'; +import { parseBody } from '@/lib/api/route-helpers'; +import { errorResponse } from '@/lib/errors'; +import { + getSalesEmailConfig, + getSalesImapConfig, + getSalesContentConfig, + redactSalesConfigForResponse, + updateSalesEmailConfig, +} from '@/lib/services/sales-email-config.service'; +import { updateSalesEmailConfigSchema } from '@/lib/validators/sales-email-config'; + +/** + * GET /api/v1/admin/email/sales-config + * + * Returns the redacted view of the sales-email config. Per §14.10 + * reps can't see the decrypted password — the response only carries + * `*PassIsSet` boolean markers via `redactSalesConfigForResponse`. + * + * Today this endpoint is admin-only because it's consumed only by the + * admin UI panel (`src/components/admin/sales-email-config-card.tsx`). + * A future rep-facing surface that needs the from-address or body + * templates can split into a separate `/email/sales-config/preview` + * endpoint scoped to `email.view` — keeping the admin endpoint locked + * to `manage_settings` avoids accidentally widening secret-adjacent + * surfaces (e.g. the SMTP host name itself can be a leak vector). + */ +export const GET = withAuth( + withPermission('admin', 'manage_settings', async (_req, ctx) => { + try { + const [email, imap, content] = await Promise.all([ + getSalesEmailConfig(ctx.portId), + getSalesImapConfig(ctx.portId), + getSalesContentConfig(ctx.portId), + ]); + const redacted = redactSalesConfigForResponse(email, imap, content); + return NextResponse.json({ data: redacted }); + } catch (error) { + return errorResponse(error); + } + }), +); + +/** + * PATCH /api/v1/admin/email/sales-config + * + * Per-port admin only. Encrypts SMTP/IMAP passwords via AES-256-GCM before + * storage; the API never returns decrypted secrets (mirror enforcement on + * the GET handler). + */ +export const PATCH = withAuth( + withPermission('admin', 'manage_settings', async (req, ctx) => { + try { + const input = await parseBody(req, updateSalesEmailConfigSchema); + await updateSalesEmailConfig(ctx.portId, input, { + userId: ctx.userId, + portId: ctx.portId, + ipAddress: ctx.ipAddress, + userAgent: ctx.userAgent, + }); + // Return the freshly-redacted view so the UI can re-render. + const [email, imap, content] = await Promise.all([ + getSalesEmailConfig(ctx.portId), + getSalesImapConfig(ctx.portId), + getSalesContentConfig(ctx.portId), + ]); + return NextResponse.json({ data: redactSalesConfigForResponse(email, imap, content) }); + } catch (error) { + return errorResponse(error); + } + }), +); diff --git a/src/app/api/v1/admin/storage/migrate/route.ts b/src/app/api/v1/admin/storage/migrate/route.ts new file mode 100644 index 0000000..4b00fa6 --- /dev/null +++ b/src/app/api/v1/admin/storage/migrate/route.ts @@ -0,0 +1,40 @@ +/** + * Admin-triggered storage migration. Same code path as `scripts/migrate-storage.ts` + * (both delegate to `runMigration()` in `@/lib/storage/migrate`). 
Body:
+ * { from: 's3'|'filesystem', to: 's3'|'filesystem', dryRun?: boolean }
+ *
+ * Super-admin only. The `/[portSlug]/admin` segment is already gated; this
+ * route enforces the same constraint defensively.
+ */
+
+import { NextResponse } from 'next/server';
+import { z } from 'zod';
+
+import { withAuth } from '@/lib/api/helpers';
+import { parseBody } from '@/lib/api/route-helpers';
+import { errorResponse, ForbiddenError } from '@/lib/errors';
+import { runMigration } from '@/lib/storage/migrate';
+
+const schema = z.object({
+  from: z.enum(['s3', 'filesystem']),
+  to: z.enum(['s3', 'filesystem']),
+  dryRun: z.boolean().default(false),
+});
+
+export const runtime = 'nodejs';
+
+export const POST = withAuth(async (req, ctx) => {
+  try {
+    if (!ctx.isSuperAdmin) {
+      throw new ForbiddenError('Super admin only');
+    }
+    const body = await parseBody(req, schema);
+    if (body.from === body.to) {
+      return NextResponse.json({ error: 'from and to must differ' }, { status: 400 });
+    }
+    const result = await runMigration({ ...body, userId: ctx.userId });
+    return NextResponse.json({ data: result });
+  } catch (error) {
+    return errorResponse(error);
+  }
+});
diff --git a/src/app/api/v1/admin/storage/route.ts b/src/app/api/v1/admin/storage/route.ts
new file mode 100644
index 0000000..5413d95
--- /dev/null
+++ b/src/app/api/v1/admin/storage/route.ts
@@ -0,0 +1,72 @@
+/**
+ * Admin storage status + connection test. Super-admin only.
+ *
+ * GET /api/v1/admin/storage — current backend + capacity stats
+ * POST /api/v1/admin/storage — S3 connection test (exercises list/put/get/delete)
+ */
+
+import { NextResponse } from 'next/server';
+
+import { withAuth } from '@/lib/api/helpers';
+import { errorResponse, ForbiddenError } from '@/lib/errors';
+import { TABLES_WITH_STORAGE_KEYS } from '@/lib/storage/migrate';
+import { getStorageBackend } from '@/lib/storage';
+import { S3Backend } from '@/lib/storage/s3';
+import { db } from '@/lib/db';
+import { sql } from 'drizzle-orm';
+
+export const runtime = 'nodejs';
+
+export const GET = withAuth(async (_req, ctx) => {
+  try {
+    if (!ctx.isSuperAdmin) {
+      throw new ForbiddenError('Super admin only');
+    }
+    const backend = await getStorageBackend();
+
+    // Row count across every storage-bearing table. `totalBytes` is
+    // reported as 0 for now: nothing below aggregates a size column yet.
+    let fileCount = 0;
+    const totalBytes = 0;
+    for (const tbl of TABLES_WITH_STORAGE_KEYS) {
+      const result = await db.execute(
+        sql.raw(
+          `SELECT COUNT(*)::bigint AS n FROM ${tbl.table} WHERE ${tbl.keyColumn} IS NOT NULL`,
+        ),
+      );
+      const rows = (
+        Array.isArray(result) ? result : ((result as { rows?: unknown[] }).rows ?? [])
+      ) as Array<{ n: number | string }>;
+      fileCount += Number(rows[0]?.n ?? 0);
+    }
+
+    return NextResponse.json({
+      data: {
+        backend: backend.name,
+        fileCount,
+        totalBytes,
+        tablesTracked: TABLES_WITH_STORAGE_KEYS.map((t) => t.table),
+      },
+    });
+  } catch (error) {
+    return errorResponse(error);
+  }
+});
+
+export const POST = withAuth(async (_req, ctx) => {
+  try {
+    if (!ctx.isSuperAdmin) {
+      throw new ForbiddenError('Super admin only');
+    }
+    const backend = await getStorageBackend();
+    if (!(backend instanceof S3Backend)) {
+      return NextResponse.json(
+        { ok: false, error: 'Test connection only available for S3 backend' },
+        { status: 400 },
+      );
+    }
+    const result = await backend.healthCheck();
+    return NextResponse.json(result);
+  } catch (error) {
+    return errorResponse(error);
+  }
+});
diff --git a/src/app/api/v1/berths/[id]/pdf-upload-url/handlers.ts b/src/app/api/v1/berths/[id]/pdf-upload-url/handlers.ts
new file mode 100644
index 0000000..8047650
--- /dev/null
+++ b/src/app/api/v1/berths/[id]/pdf-upload-url/handlers.ts
@@ -0,0 +1,70 @@
+/**
+ * Returns a presigned URL the browser can use to PUT a PDF directly to the
+ * active storage backend. The URL is constrained by content-length-range up
+ * to `system_settings.berth_pdf_max_upload_mb` (default 15 MB) per §11.1.
+ *
+ * For S3 backends this is a true signed URL; for filesystem backends it's a
+ * CRM-internal proxy URL with an HMAC token (see `FilesystemBackend`).
+ */
+
+import { NextResponse } from 'next/server';
+
+import { type RouteHandler } from '@/lib/api/helpers';
+import { db } from '@/lib/db';
+import { berths } from '@/lib/db/schema/berths';
+import { eq } from 'drizzle-orm';
+import { errorResponse, NotFoundError, ValidationError } from '@/lib/errors';
+import { getMaxUploadMb } from '@/lib/services/berth-pdf.service';
+import { getStorageBackend } from '@/lib/storage';
+
+interface PostBody {
+  fileName: string;
+  /** Size hint in bytes — used to early-reject oversized uploads before we
+   * burn a presigned URL. */
+  sizeBytes?: number;
+}
+
+export const postHandler: RouteHandler = async (req, _ctx, params) => {
+  try {
+    const body = (await req.json()) as Partial<PostBody>;
+    const fileName = (body.fileName ?? '').trim();
+    if (!fileName) throw new ValidationError('fileName is required');
+
+    const berthRow = await db.query.berths.findFirst({ where: eq(berths.id, params.id!) });
+    if (!berthRow) throw new NotFoundError('Berth');
+
+    const maxMb = await getMaxUploadMb(berthRow.portId);
+    const maxBytes = maxMb * 1024 * 1024;
+    if (typeof body.sizeBytes === 'number' && body.sizeBytes > maxBytes) {
+      throw new ValidationError(
+        `File exceeds ${maxMb} MB upload cap (got ${(body.sizeBytes / 1024 / 1024).toFixed(1)} MB).`,
+      );
+    }
+
+    // Provisional version number: the actual row insert happens in POST
+    // /pdf-versions and re-computes via SELECT max+1 inside a transaction,
+    // so a race between two reps just shifts which one wins the version
+    // slot. The storage key is crypto.randomUUID()-namespaced so collisions
+    // in the storage layer are impossible.
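The transactional `SELECT max+1` allocation the comment above refers to lives in `berth-pdf.service.ts` and is not part of this diff. A rough sketch of that pattern, assuming hypothetical column names (`version_number`, `berth_id`) and a per-berth advisory lock for serialization:

```ts
import { sql } from 'drizzle-orm';
import { db } from '@/lib/db';

// Sketch only: the UUID storage key already makes blob paths collision-free;
// the advisory lock just protects the human-facing version counter.
async function allocateVersionNumberSketch(berthId: string): Promise<number> {
  return db.transaction(async (tx) => {
    // hashtext() folds the uuid into an advisory-lock key; the lock is
    // released automatically when the transaction commits or aborts.
    await tx.execute(sql`SELECT pg_advisory_xact_lock(hashtext(${berthId}))`);
    const result = await tx.execute(
      sql`SELECT COALESCE(MAX(version_number), 0) + 1 AS next
          FROM berth_pdf_versions WHERE berth_id = ${berthId}`,
    );
    const rows = (result as { rows?: Array<{ next: string | number }> }).rows ?? [];
    return Number(rows[0]?.next ?? 1);
  });
}
```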
+ const sanitized = fileName.replace(/[^a-zA-Z0-9._-]/g, '_').slice(0, 200) || 'berth.pdf'; + const storageKey = `berths/${params.id!}/uploads/${crypto.randomUUID()}_${sanitized}`; + + const backend = await getStorageBackend(); + const presigned = await backend.presignUpload(storageKey, { + contentType: 'application/pdf', + expirySeconds: 900, + }); + + return NextResponse.json({ + data: { + url: presigned.url, + method: presigned.method, + storageKey, + maxBytes, + backend: backend.name, + }, + }); + } catch (error) { + return errorResponse(error); + } +}; diff --git a/src/app/api/v1/berths/[id]/pdf-upload-url/route.ts b/src/app/api/v1/berths/[id]/pdf-upload-url/route.ts new file mode 100644 index 0000000..8978cea --- /dev/null +++ b/src/app/api/v1/berths/[id]/pdf-upload-url/route.ts @@ -0,0 +1,5 @@ +import { withAuth, withPermission } from '@/lib/api/helpers'; + +import { postHandler } from './handlers'; + +export const POST = withAuth(withPermission('berths', 'edit', postHandler)); diff --git a/src/app/api/v1/berths/[id]/pdf-versions/[versionId]/rollback/handlers.ts b/src/app/api/v1/berths/[id]/pdf-versions/[versionId]/rollback/handlers.ts new file mode 100644 index 0000000..30e67e0 --- /dev/null +++ b/src/app/api/v1/berths/[id]/pdf-versions/[versionId]/rollback/handlers.ts @@ -0,0 +1,14 @@ +import { NextResponse } from 'next/server'; + +import { type RouteHandler } from '@/lib/api/helpers'; +import { errorResponse } from '@/lib/errors'; +import { rollbackToVersion } from '@/lib/services/berth-pdf.service'; + +export const postHandler: RouteHandler = async (_req, _ctx, params) => { + try { + const result = await rollbackToVersion(params.id!, params.versionId!); + return NextResponse.json({ data: result }); + } catch (error) { + return errorResponse(error); + } +}; diff --git a/src/app/api/v1/berths/[id]/pdf-versions/[versionId]/rollback/route.ts b/src/app/api/v1/berths/[id]/pdf-versions/[versionId]/rollback/route.ts new file mode 100644 index 0000000..8978cea --- /dev/null +++ b/src/app/api/v1/berths/[id]/pdf-versions/[versionId]/rollback/route.ts @@ -0,0 +1,5 @@ +import { withAuth, withPermission } from '@/lib/api/helpers'; + +import { postHandler } from './handlers'; + +export const POST = withAuth(withPermission('berths', 'edit', postHandler)); diff --git a/src/app/api/v1/berths/[id]/pdf-versions/handlers.ts b/src/app/api/v1/berths/[id]/pdf-versions/handlers.ts new file mode 100644 index 0000000..6fc895d --- /dev/null +++ b/src/app/api/v1/berths/[id]/pdf-versions/handlers.ts @@ -0,0 +1,88 @@ +/** + * Route handlers for `/api/v1/berths/[id]/pdf-versions` (Phase 6b). + * + * Lives in handlers.ts (not route.ts) so integration tests can call them + * directly, bypassing the auth/permission middleware (per CLAUDE.md + * "Route handler exports" convention). 
+ */
+
+import { NextResponse } from 'next/server';
+
+import { type RouteHandler } from '@/lib/api/helpers';
+import { errorResponse, ValidationError } from '@/lib/errors';
+import { listBerthPdfVersions, uploadBerthPdf } from '@/lib/services/berth-pdf.service';
+
+interface PostBody {
+  storageKey: string;
+  fileName: string;
+  fileSizeBytes: number;
+  sha256: string;
+  parseResults?: {
+    engine: 'acroform' | 'ocr' | 'ai';
+    extracted?: Record<string, unknown>;
+    meanConfidence?: number;
+    warnings?: string[];
+  };
+}
+
+export const getHandler: RouteHandler = async (_req, _ctx, params) => {
+  try {
+    const versions = await listBerthPdfVersions(params.id!);
+    return NextResponse.json({ data: versions });
+  } catch (error) {
+    return errorResponse(error);
+  }
+};
+
+export const postHandler: RouteHandler = async (req, ctx, params) => {
+  try {
+    const body = (await req.json()) as Partial<PostBody>;
+    if (!body.storageKey || !body.fileName) {
+      throw new ValidationError('storageKey and fileName are required');
+    }
+    if (typeof body.fileSizeBytes !== 'number' || body.fileSizeBytes <= 0) {
+      throw new ValidationError('fileSizeBytes must be a positive integer');
+    }
+    if (!body.sha256 || typeof body.sha256 !== 'string') {
+      throw new ValidationError('sha256 is required');
+    }
+    const result = await uploadBerthPdf({
+      berthId: params.id!,
+      storageKey: body.storageKey,
+      fileName: body.fileName,
+      fileSizeBytes: body.fileSizeBytes,
+      sha256: body.sha256,
+      uploadedBy: ctx.userId,
+      parseResult: body.parseResults
+        ? {
+            engine: body.parseResults.engine,
+            // Reconstruct just enough of the ParseResult shape to round-trip
+            // through serialization; the rep already saw the conflicts in the
+            // diff dialog, so storing the engine + extracted is what we need
+            // for audit.
+            fields: Object.fromEntries(
+              Object.entries(body.parseResults.extracted ?? {}).map(([k, v]) => {
+                if (v && typeof v === 'object' && 'value' in v) {
+                  const obj = v as { value: unknown; confidence?: number };
+                  return [
+                    k,
+                    {
+                      value: obj.value as never,
+                      confidence: typeof obj.confidence === 'number' ? obj.confidence : 1,
+                      engine: body.parseResults!.engine,
+                    },
+                  ];
+                }
+                return [k, undefined];
+              }),
+            ) as never,
+            meanConfidence: body.parseResults.meanConfidence ?? 1,
+            warnings: body.parseResults.warnings ?? [],
+          }
+        : undefined,
+    });
+    return NextResponse.json({ data: result }, { status: 201 });
+  } catch (error) {
+    return errorResponse(error);
+  }
+};
diff --git a/src/app/api/v1/berths/[id]/pdf-versions/parse-results/apply/handlers.ts b/src/app/api/v1/berths/[id]/pdf-versions/parse-results/apply/handlers.ts
new file mode 100644
index 0000000..078d73d
--- /dev/null
+++ b/src/app/api/v1/berths/[id]/pdf-versions/parse-results/apply/handlers.ts
@@ -0,0 +1,24 @@
+import { NextResponse } from 'next/server';
+
+import { type RouteHandler } from '@/lib/api/helpers';
+import { errorResponse, ValidationError } from '@/lib/errors';
+import { applyParseResults, type ExtractedBerthFields } from '@/lib/services/berth-pdf.service';
+
+interface PostBody {
+  versionId: string;
+  fieldsToApply: Partial<ExtractedBerthFields>;
+}
+
+export const postHandler: RouteHandler = async (req, _ctx, params) => {
+  try {
+    const body = (await req.json()) as Partial<PostBody>;
+    if (!body.versionId) throw new ValidationError('versionId is required');
+    if (!body.fieldsToApply || typeof body.fieldsToApply !== 'object') {
+      throw new ValidationError('fieldsToApply must be an object');
+    }
+    const result = await applyParseResults(params.id!, body.versionId, body.fieldsToApply);
+    return NextResponse.json({ data: result });
+  } catch (error) {
+    return errorResponse(error);
+  }
+};
diff --git a/src/app/api/v1/berths/[id]/pdf-versions/parse-results/apply/route.ts b/src/app/api/v1/berths/[id]/pdf-versions/parse-results/apply/route.ts
new file mode 100644
index 0000000..8978cea
--- /dev/null
+++ b/src/app/api/v1/berths/[id]/pdf-versions/parse-results/apply/route.ts
@@ -0,0 +1,5 @@
+import { withAuth, withPermission } from '@/lib/api/helpers';
+
+import { postHandler } from './handlers';
+
+export const POST = withAuth(withPermission('berths', 'edit', postHandler));
diff --git a/src/app/api/v1/berths/[id]/pdf-versions/route.ts b/src/app/api/v1/berths/[id]/pdf-versions/route.ts
new file mode 100644
index 0000000..f5f2600
--- /dev/null
+++ b/src/app/api/v1/berths/[id]/pdf-versions/route.ts
@@ -0,0 +1,6 @@
+import { withAuth, withPermission } from '@/lib/api/helpers';
+
+import { getHandler, postHandler } from './handlers';
+
+export const GET = withAuth(withPermission('berths', 'view', getHandler));
+export const POST = withAuth(withPermission('berths', 'edit', postHandler));
diff --git a/src/app/api/v1/document-sends/berth-pdf/route.ts b/src/app/api/v1/document-sends/berth-pdf/route.ts
new file mode 100644
index 0000000..5706313
--- /dev/null
+++ b/src/app/api/v1/document-sends/berth-pdf/route.ts
@@ -0,0 +1,33 @@
+import { NextResponse } from 'next/server';
+
+import { withAuth } from '@/lib/api/helpers';
+import { parseBody } from '@/lib/api/route-helpers';
+import { errorResponse } from '@/lib/errors';
+import { sendBerthPdf } from '@/lib/services/document-sends.service';
+import { sendBerthPdfSchema } from '@/lib/validators/document-sends';
+
+/**
+ * POST /api/v1/document-sends/berth-pdf
+ *
+ * Sends the active per-berth PDF version to a client recipient. The body
+ * markdown goes through the merge-field expander + sanitizer
+ * (`renderEmailBody`) before reaching nodemailer (§14.7 critical mitigation:
+ * body XSS).
+ */ +export const POST = withAuth(async (req, ctx) => { + try { + const input = await parseBody(req, sendBerthPdfSchema); + const result = await sendBerthPdf({ + portId: ctx.portId, + berthId: input.berthId, + recipient: input.recipient, + customBodyMarkdown: input.customBodyMarkdown, + sentBy: ctx.userId, + ipAddress: ctx.ipAddress, + userAgent: ctx.userAgent, + }); + return NextResponse.json({ data: result }); + } catch (error) { + return errorResponse(error); + } +}); diff --git a/src/app/api/v1/document-sends/brochure/route.ts b/src/app/api/v1/document-sends/brochure/route.ts new file mode 100644 index 0000000..8e74f57 --- /dev/null +++ b/src/app/api/v1/document-sends/brochure/route.ts @@ -0,0 +1,31 @@ +import { NextResponse } from 'next/server'; + +import { withAuth } from '@/lib/api/helpers'; +import { parseBody } from '@/lib/api/route-helpers'; +import { errorResponse } from '@/lib/errors'; +import { sendBrochure } from '@/lib/services/document-sends.service'; +import { sendBrochureSchema } from '@/lib/validators/document-sends'; + +/** + * POST /api/v1/document-sends/brochure + * + * Sends a brochure (default or specified) to a client recipient. Same + * sanitization + audit-row pipeline as the berth-pdf endpoint. + */ +export const POST = withAuth(async (req, ctx) => { + try { + const input = await parseBody(req, sendBrochureSchema); + const result = await sendBrochure({ + portId: ctx.portId, + brochureId: input.brochureId, + recipient: input.recipient, + customBodyMarkdown: input.customBodyMarkdown, + sentBy: ctx.userId, + ipAddress: ctx.ipAddress, + userAgent: ctx.userAgent, + }); + return NextResponse.json({ data: result }); + } catch (error) { + return errorResponse(error); + } +}); diff --git a/src/app/api/v1/document-sends/preview/route.ts b/src/app/api/v1/document-sends/preview/route.ts new file mode 100644 index 0000000..6b714b3 --- /dev/null +++ b/src/app/api/v1/document-sends/preview/route.ts @@ -0,0 +1,31 @@ +import { NextResponse } from 'next/server'; + +import { withAuth } from '@/lib/api/helpers'; +import { parseBody } from '@/lib/api/route-helpers'; +import { errorResponse } from '@/lib/errors'; +import { previewBody } from '@/lib/services/document-sends.service'; +import { previewBodySchema } from '@/lib/validators/document-sends'; + +/** + * POST /api/v1/document-sends/preview + * + * Renders a body for the dry-run UI without actually sending. Returns the + * sanitized HTML, the post-merge markdown, and the list of unresolved + * `{{tokens}}` so the UI can block submit until the rep fills them in + * (§14.7 mitigation). + */ +export const POST = withAuth(async (req, ctx) => { + try { + const input = await parseBody(req, previewBodySchema); + const result = await previewBody( + ctx.portId, + input.documentKind, + input.recipient, + input.customBodyMarkdown ?? 
null, + { berthId: input.berthId, brochureLabel: input.brochureId }, + ); + return NextResponse.json({ data: result }); + } catch (error) { + return errorResponse(error); + } +}); diff --git a/src/app/api/v1/document-sends/route.ts b/src/app/api/v1/document-sends/route.ts new file mode 100644 index 0000000..5b6f1a4 --- /dev/null +++ b/src/app/api/v1/document-sends/route.ts @@ -0,0 +1,23 @@ +import { NextResponse } from 'next/server'; + +import { withAuth } from '@/lib/api/helpers'; +import { parseQuery } from '@/lib/api/route-helpers'; +import { errorResponse } from '@/lib/errors'; +import { listSends } from '@/lib/services/document-sends.service'; +import { listSendsQuerySchema } from '@/lib/validators/document-sends'; + +export const GET = withAuth(async (req, ctx) => { + try { + const query = parseQuery(req, listSendsQuerySchema); + const data = await listSends({ + portId: ctx.portId, + clientId: query.clientId, + interestId: query.interestId, + berthId: query.berthId, + limit: query.limit, + }); + return NextResponse.json({ data }); + } catch (error) { + return errorResponse(error); + } +}); diff --git a/src/app/api/v1/expenses/export/pdf/route.ts b/src/app/api/v1/expenses/export/pdf/route.ts index 8e1ad1d..7b7bb04 100644 --- a/src/app/api/v1/expenses/export/pdf/route.ts +++ b/src/app/api/v1/expenses/export/pdf/route.ts @@ -2,21 +2,74 @@ import { NextResponse } from 'next/server'; import { withAuth, withPermission } from '@/lib/api/helpers'; import { errorResponse } from '@/lib/errors'; -import { exportPdf } from '@/lib/services/expense-export'; -import { listExpensesSchema } from '@/lib/validators/expenses'; +import { streamExpensePdf } from '@/lib/services/expense-pdf.service'; +import { exportExpensePdfSchema } from '@/lib/validators/expenses'; + +/** + * POST /api/v1/expenses/export/pdf + * + * Streams the expense report PDF directly to the client — body bytes + * leave the process as pdfkit writes them, so the route is safe for + * hundreds of expenses with full-resolution receipt images. See + * `expense-pdf.service.ts` for the memory-budget design. + * + * Request body shape (zod-validated): + * { + * expenseIds?: string[] // explicit selection (preferred) + * filter?: {...} // listExpenses-style filter when no ids + * options: { + * documentName, subheader?, groupBy, includeReceipts, + * includeReceiptContents, includeSummary, includeDetails, + * includeProcessingFee, targetCurrency, pageFormat, + * } + * } + * + * Response: `application/pdf` binary stream + Content-Disposition. + */ +export const runtime = 'nodejs'; +export const dynamic = 'force-dynamic'; export const POST = withAuth( - withPermission('expenses', 'view', async (req, ctx) => { + withPermission('expenses', 'export', async (req, ctx) => { try { const body = await req.json().catch(() => ({})); - const query = listExpensesSchema.parse(body); - const pdf = await exportPdf(ctx.portId, query); + const input = exportExpensePdfSchema.parse(body); - return new NextResponse(Buffer.from(pdf), { + const { stream, suggestedFilename } = await streamExpensePdf({ + portId: ctx.portId, + expenseIds: input.expenseIds, + filter: input.filter + ? { + dateFrom: input.filter.dateFrom ?? null, + dateTo: input.filter.dateTo ?? null, + category: input.filter.category ?? null, + paymentStatus: input.filter.paymentStatus ?? null, + payer: input.filter.payer ?? null, + includeArchived: input.filter.includeArchived ?? 
false, + } + : undefined, + options: input.options, + // Forward the request abort signal so the streaming PDF builder + // stops fetching/resizing receipts the moment the client disconnects + // (otherwise an aborted 1000-receipt export keeps the worker busy + // for minutes after the user navigated away — see audit finding 2). + signal: req.signal, + }); + + // Content-Disposition filename hardening: the validator caps length + // but `\s` matches CR/LF, which would let an attacker forge response + // headers. Strip everything that isn't word/space/dot/dash, AND set + // the RFC 5987 `filename*` so a UTF-8 body still survives. + const safeFilename = suggestedFilename.replace(/[^\w. \-]+/g, '_'); + const disposition = `attachment; filename="${safeFilename}"; filename*=UTF-8''${encodeURIComponent(suggestedFilename)}`; + + return new NextResponse(stream, { status: 200, headers: { 'Content-Type': 'application/pdf', - 'Content-Disposition': `attachment; filename="expenses-${Date.now()}.pdf"`, + 'Content-Disposition': disposition, + 'Cache-Control': 'private, no-store, max-age=0', + 'X-Content-Type-Options': 'nosniff', }, }); } catch (error) { diff --git a/src/app/api/v1/interests/[id]/berths/[berthId]/handlers.ts b/src/app/api/v1/interests/[id]/berths/[berthId]/handlers.ts new file mode 100644 index 0000000..4c5c3bd --- /dev/null +++ b/src/app/api/v1/interests/[id]/berths/[berthId]/handlers.ts @@ -0,0 +1,156 @@ +import { NextResponse } from 'next/server'; +import { and, eq } from 'drizzle-orm'; +import { z } from 'zod'; + +import { type RouteHandler } from '@/lib/api/helpers'; +import { parseBody } from '@/lib/api/route-helpers'; +import { errorResponse, NotFoundError, ValidationError } from '@/lib/errors'; +import { db } from '@/lib/db'; +import { interests, interestBerths } from '@/lib/db/schema/interests'; +import { berths } from '@/lib/db/schema/berths'; +import { removeInterestBerth, upsertInterestBerth } from '@/lib/services/interest-berths.service'; +import { createAuditLog } from '@/lib/audit'; +import { emitToRoom } from '@/lib/socket/server'; + +// ─── Schemas ──────────────────────────────────────────────────────────────── + +/** + * Partial update of a junction row's role flags + EOI bypass fields. Every + * field is optional; passing only the ones the rep wants to change. + * + * `eoiBypassReason` is a tri-state: + * - omitted → no change + * - non-empty → record bypass (server stamps `eoiBypassedAt = now()` and + * `eoiBypassedBy = caller`) + * - null → clear bypass (also clears `eoiBypassedBy` / `eoiBypassedAt`) + */ +const patchBerthSchema = z + .object({ + isPrimary: z.boolean().optional(), + isSpecificInterest: z.boolean().optional(), + isInEoiBundle: z.boolean().optional(), + eoiBypassReason: z.string().max(2000).nullable().optional(), + }) + .refine((v) => Object.values(v).some((x) => x !== undefined), { + message: 'At least one field must be provided.', + }); + +// ─── Helpers ──────────────────────────────────────────────────────────────── + +async function loadScopedRow(interestId: string, berthId: string, portId: string) { + // Verify interest port-scope first so unrelated 404s look identical to a + // truly-missing row (enumeration prevention — plan §14.10). 
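One aside before the body of `loadScopedRow`: the tri-state documented on `eoiBypassReason` above, translated into column writes, implies a mapping along these lines (a sketch; the actual handling inside `upsertInterestBerth` may differ):

```ts
// Sketch of the documented tri-state:
//   undefined → leave all three bypass columns untouched
//   null      → clear reason, actor, and timestamp together
//   non-empty → stamp reason + acting user + "now"
function bypassColumnsSketch(eoiBypassReason: string | null | undefined, userId: string) {
  if (eoiBypassReason === undefined) return {};
  if (eoiBypassReason === null) {
    return { eoiBypassReason: null, eoiBypassedBy: null, eoiBypassedAt: null };
  }
  return { eoiBypassReason, eoiBypassedBy: userId, eoiBypassedAt: new Date() };
}
```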
+ const interest = await db.query.interests.findFirst({ + where: eq(interests.id, interestId), + }); + if (!interest || interest.portId !== portId) { + throw new NotFoundError('Interest'); + } + const link = await db.query.interestBerths.findFirst({ + where: and(eq(interestBerths.interestId, interestId), eq(interestBerths.berthId, berthId)), + }); + if (!link) { + throw new NotFoundError('Berth link'); + } + // Also confirm the berth itself is in-port; defensive against a junction row + // pointing at a foreign berth (shouldn't happen, but cheap to check). + const berth = await db.query.berths.findFirst({ + where: and(eq(berths.id, berthId), eq(berths.portId, portId)), + }); + if (!berth) { + throw new NotFoundError('Berth'); + } + return { interest, link, berth }; +} + +// ─── PATCH /api/v1/interests/[id]/berths/[berthId] ────────────────────────── +export const patchHandler: RouteHandler = async (req, ctx, params) => { + try { + const interestId = params.id!; + const berthId = params.berthId!; + const body = await parseBody(req, patchBerthSchema); + + const { interest } = await loadScopedRow(interestId, berthId, ctx.portId); + + // Plan §5.5: the bypass control is only available once the interest's + // primary EOI is signed. Defend the API too — never trust the UI to + // gate this. + if (body.eoiBypassReason !== undefined && interest.eoiStatus !== 'signed') { + throw new ValidationError('EOI bypass requires a signed primary EOI on the interest'); + } + + const updated = await upsertInterestBerth(interestId, berthId, { + isPrimary: body.isPrimary, + isSpecificInterest: body.isSpecificInterest, + isInEoiBundle: body.isInEoiBundle, + eoiBypassReason: body.eoiBypassReason, + eoiBypassedBy: body.eoiBypassReason ? ctx.userId : null, + }); + + void createAuditLog({ + userId: ctx.userId, + portId: ctx.portId, + action: 'update', + entityType: 'interest', + entityId: interestId, + newValue: { berthId, ...body }, + metadata: { type: 'berth_link_updated' }, + ipAddress: ctx.ipAddress, + userAgent: ctx.userAgent, + }); + + emitToRoom(`port:${ctx.portId}`, 'interest:berthLinkUpdated', { + interestId, + berthId, + }); + void import('@/lib/services/webhook-dispatch').then(({ dispatchWebhookEvent }) => + dispatchWebhookEvent(ctx.portId, 'interest:berthLinkUpdated', { + interestId, + berthId, + }), + ); + + return NextResponse.json({ data: updated }); + } catch (error) { + return errorResponse(error); + } +}; + +// ─── DELETE /api/v1/interests/[id]/berths/[berthId] ───────────────────────── +export const deleteHandler: RouteHandler = async (_req, ctx, params) => { + try { + const interestId = params.id!; + const berthId = params.berthId!; + + await loadScopedRow(interestId, berthId, ctx.portId); + + await removeInterestBerth(interestId, berthId); + + void createAuditLog({ + userId: ctx.userId, + portId: ctx.portId, + action: 'update', + entityType: 'interest', + entityId: interestId, + oldValue: { berthId }, + metadata: { type: 'berth_removed_from_interest' }, + ipAddress: ctx.ipAddress, + userAgent: ctx.userAgent, + }); + + emitToRoom(`port:${ctx.portId}`, 'interest:berthUnlinked', { + interestId, + berthId, + }); + void import('@/lib/services/webhook-dispatch').then(({ dispatchWebhookEvent }) => + dispatchWebhookEvent(ctx.portId, 'interest:berthUnlinked', { + interestId, + berthId, + }), + ); + + return new NextResponse(null, { status: 204 }); + } catch (error) { + return errorResponse(error); + } +}; diff --git a/src/app/api/v1/interests/[id]/berths/[berthId]/route.ts 
b/src/app/api/v1/interests/[id]/berths/[berthId]/route.ts new file mode 100644 index 0000000..2ec4ab3 --- /dev/null +++ b/src/app/api/v1/interests/[id]/berths/[berthId]/route.ts @@ -0,0 +1,6 @@ +import { withAuth, withPermission } from '@/lib/api/helpers'; + +import { deleteHandler, patchHandler } from './handlers'; + +export const PATCH = withAuth(withPermission('interests', 'edit', patchHandler)); +export const DELETE = withAuth(withPermission('interests', 'edit', deleteHandler)); diff --git a/src/app/api/v1/interests/[id]/berths/handlers.ts b/src/app/api/v1/interests/[id]/berths/handlers.ts new file mode 100644 index 0000000..1ca32d7 --- /dev/null +++ b/src/app/api/v1/interests/[id]/berths/handlers.ts @@ -0,0 +1,109 @@ +import { NextResponse } from 'next/server'; +import { and, eq } from 'drizzle-orm'; +import { z } from 'zod'; + +import { type RouteHandler } from '@/lib/api/helpers'; +import { parseBody } from '@/lib/api/route-helpers'; +import { errorResponse, NotFoundError, ValidationError } from '@/lib/errors'; +import { db } from '@/lib/db'; +import { interests } from '@/lib/db/schema/interests'; +import { berths } from '@/lib/db/schema/berths'; +import { listBerthsForInterest, upsertInterestBerth } from '@/lib/services/interest-berths.service'; +import { createAuditLog } from '@/lib/audit'; +import { emitToRoom } from '@/lib/socket/server'; + +// ─── Schemas ──────────────────────────────────────────────────────────────── + +const addBerthSchema = z.object({ + berthId: z.string().min(1), + /** Drives the public-map "Under Offer" sub-status. See plan §5.4. */ + isSpecificInterest: z.boolean(), +}); + +// ─── GET /api/v1/interests/[id]/berths ────────────────────────────────────── +// +// Returns the linked-berths list (plan §5.5) along with the parent interest's +// `eoiStatus` so the UI can decide whether to show the EOI-bypass control. +// Tenant-scoped: 404 when the interest doesn't belong to the caller's port, +// matching the recommender route's enumeration-prevention behaviour. +export const listHandler: RouteHandler = async (_req, ctx, params) => { + try { + const interestId = params.id!; + const interest = await db.query.interests.findFirst({ + where: eq(interests.id, interestId), + }); + if (!interest || interest.portId !== ctx.portId) { + throw new NotFoundError('Interest'); + } + const links = await listBerthsForInterest(interestId); + return NextResponse.json({ + data: links, + meta: { eoiStatus: interest.eoiStatus }, + }); + } catch (error) { + return errorResponse(error); + } +}; + +// ─── POST /api/v1/interests/[id]/berths ───────────────────────────────────── +// +// Add a (non-primary) berth link to the interest. Defaults to +// `isInEoiBundle=false`, `isPrimary=false`; the rep can flip these later via +// the linked-berths list (PATCH route below). +export const addHandler: RouteHandler = async (req, ctx, params) => { + try { + const body = await parseBody(req, addBerthSchema); + const interestId = params.id!; + + const interest = await db.query.interests.findFirst({ + where: eq(interests.id, interestId), + }); + if (!interest || interest.portId !== ctx.portId) { + throw new NotFoundError('Interest'); + } + + // Tenant scope: berth must belong to this port (never trust a client- + // supplied id to cross port boundaries — plan §14.10). 
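The port-scope guard that follows recurs in several of these handlers; if it were ever factored out, a shared helper might look like this (hypothetical, not part of this change):

```ts
import { and, eq } from 'drizzle-orm';

import { db } from '@/lib/db';
import { berths } from '@/lib/db/schema/berths';
import { ValidationError } from '@/lib/errors';

// Hypothetical helper: every handler that accepts a client-supplied
// berthId rejects ids outside the caller's port the same way.
async function assertBerthInPortSketch(berthId: string, portId: string) {
  const berth = await db.query.berths.findFirst({
    where: and(eq(berths.id, berthId), eq(berths.portId, portId)),
  });
  if (!berth) throw new ValidationError('berthId not found in this port');
  return berth;
}
```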
+ const berth = await db.query.berths.findFirst({ + where: and(eq(berths.id, body.berthId), eq(berths.portId, ctx.portId)), + }); + if (!berth) { + throw new ValidationError('berthId not found in this port'); + } + + const link = await upsertInterestBerth(interestId, body.berthId, { + isSpecificInterest: body.isSpecificInterest, + addedBy: ctx.userId, + }); + + void createAuditLog({ + userId: ctx.userId, + portId: ctx.portId, + action: 'update', + entityType: 'interest', + entityId: interestId, + newValue: { berthId: body.berthId, isSpecificInterest: body.isSpecificInterest }, + metadata: { type: 'berth_added_to_interest' }, + ipAddress: ctx.ipAddress, + userAgent: ctx.userAgent, + }); + + emitToRoom(`port:${ctx.portId}`, 'interest:berthLinked', { + interestId, + berthId: body.berthId, + }); + // Outbound webhook: the legacy /link-berth path dispatched + // `interest.berth_linked` and external integrations subscribe to it. + // The new junction-add path must keep that contract. + void import('@/lib/services/webhook-dispatch').then(({ dispatchWebhookEvent }) => + dispatchWebhookEvent(ctx.portId, 'interest:berthLinked', { + interestId, + berthId: body.berthId, + }), + ); + + return NextResponse.json({ data: link }, { status: 201 }); + } catch (error) { + return errorResponse(error); + } +}; diff --git a/src/app/api/v1/interests/[id]/berths/route.ts b/src/app/api/v1/interests/[id]/berths/route.ts new file mode 100644 index 0000000..70d47f6 --- /dev/null +++ b/src/app/api/v1/interests/[id]/berths/route.ts @@ -0,0 +1,6 @@ +import { withAuth, withPermission } from '@/lib/api/helpers'; + +import { addHandler, listHandler } from './handlers'; + +export const GET = withAuth(withPermission('interests', 'view', listHandler)); +export const POST = withAuth(withPermission('interests', 'edit', addHandler)); diff --git a/src/app/api/v1/interests/[id]/recommend-berths/route.ts b/src/app/api/v1/interests/[id]/recommend-berths/route.ts new file mode 100644 index 0000000..71d2397 --- /dev/null +++ b/src/app/api/v1/interests/[id]/recommend-berths/route.ts @@ -0,0 +1,44 @@ +import { NextResponse } from 'next/server'; +import { z } from 'zod'; + +import { withAuth, withPermission } from '@/lib/api/helpers'; +import { parseBody } from '@/lib/api/route-helpers'; +import { errorResponse } from '@/lib/errors'; +import { recommendBerths } from '@/lib/services/berth-recommender.service'; + +/** + * POST body — mirrors `RecommendBerthsArgs` minus the `interestId` (route + * param) and `portId` (resolved from the auth context — never trust a + * client-supplied port, plan §14.10). 
+ */ +const recommendBerthsSchema = z.object({ + topN: z.number().int().min(1).max(999).optional(), + maxOversizePct: z.number().min(0).max(1000).optional(), + showLateStage: z.boolean().optional(), + amenityFilters: z + .object({ + minPowerCapacityKw: z.number().min(0).optional(), + requiredVoltage: z.number().int().min(0).optional(), + requiredAccess: z.string().min(1).optional(), + requiredMooringType: z.string().min(1).optional(), + requiredCleatCapacity: z.string().min(1).optional(), + }) + .optional(), +}); + +// POST /api/v1/interests/[id]/recommend-berths +export const POST = withAuth( + withPermission('interests', 'view', async (req, ctx, params) => { + try { + const body = await parseBody(req, recommendBerthsSchema); + const data = await recommendBerths({ + interestId: params.id!, + portId: ctx.portId, + ...body, + }); + return NextResponse.json({ data }); + } catch (error) { + return errorResponse(error); + } + }), +); diff --git a/src/components/admin/brochures-admin-panel.tsx b/src/components/admin/brochures-admin-panel.tsx new file mode 100644 index 0000000..e6c64c2 --- /dev/null +++ b/src/components/admin/brochures-admin-panel.tsx @@ -0,0 +1,345 @@ +'use client'; + +/** + * Brochures admin panel (Phase 7 §5.8). + * + * Lists every brochure for the port (including archived). Lets a + * `manage_settings` admin: + * - Create new brochures. + * - Upload a new version (direct-to-storage presigned PUT, see §11.1). + * - Mark default / archive. + */ +import { useState } from 'react'; +import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query'; +import { Archive, FileText, Loader2, Plus, Star, Upload } from 'lucide-react'; +import { toast } from 'sonner'; + +import { Button } from '@/components/ui/button'; +import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card'; +import { + Dialog, + DialogContent, + DialogDescription, + DialogFooter, + DialogHeader, + DialogTitle, +} from '@/components/ui/dialog'; +import { Input } from '@/components/ui/input'; +import { Label } from '@/components/ui/label'; +import { Textarea } from '@/components/ui/textarea'; +import { Switch } from '@/components/ui/switch'; +import { apiFetch } from '@/lib/api/client'; + +interface BrochureRow { + id: string; + label: string; + description: string | null; + isDefault: boolean; + archivedAt: string | null; + versionCount: number; + currentVersion: { + id: string; + fileName: string; + fileSizeBytes: number; + uploadedAt: string; + } | null; +} + +interface BrochuresResponse { + data: BrochureRow[]; +} + +interface UploadGrantResponse { + data: { storageKey: string; uploadUrl: string; method: 'PUT'; maxBytes: number }; +} + +export function BrochuresAdminPanel() { + const queryClient = useQueryClient(); + const [createOpen, setCreateOpen] = useState(false); + + const brochuresQuery = useQuery({ + queryKey: ['brochures', 'admin'], + queryFn: () => apiFetch('/api/v1/admin/brochures'), + }); + + const rows = brochuresQuery.data?.data ?? []; + + return ( +
+
+ +
+ + {brochuresQuery.isLoading && ( +
+ Loading… +
+ )} + + {!brochuresQuery.isLoading && rows.length === 0 && ( + + + No brochures yet. Click “New brochure” to add one. + + + )} + +
+ {rows.map((b) => ( + { + void queryClient.invalidateQueries({ queryKey: ['brochures', 'admin'] }); + void queryClient.invalidateQueries({ queryKey: ['brochures', 'list'] }); + }} + /> + ))} +
+ + { + void queryClient.invalidateQueries({ queryKey: ['brochures', 'admin'] }); + }} + /> +
+ ); +} + +function BrochureCard({ brochure, onChange }: { brochure: BrochureRow; onChange: () => void }) { + const [uploading, setUploading] = useState(false); + + const setDefaultMutation = useMutation({ + mutationFn: () => + apiFetch(`/api/v1/admin/brochures/${brochure.id}`, { + method: 'PATCH', + body: { isDefault: true }, + }), + onSuccess: () => { + toast.success('Default brochure updated'); + onChange(); + }, + }); + + const archiveMutation = useMutation({ + mutationFn: () => apiFetch(`/api/v1/admin/brochures/${brochure.id}`, { method: 'DELETE' }), + onSuccess: () => { + toast.success('Brochure archived'); + onChange(); + }, + }); + + async function handleUpload(file: File) { + setUploading(true); + try { + const grant: UploadGrantResponse = await apiFetch( + `/api/v1/admin/brochures/${brochure.id}/versions`, + ); + if (file.size > grant.data.maxBytes) { + throw new Error( + `File is too large. Max is ${(grant.data.maxBytes / 1024 / 1024).toFixed(0)}MB.`, + ); + } + // Direct-to-storage PUT (§11.1). + const putRes = await fetch(grant.data.uploadUrl, { + method: 'PUT', + body: file, + headers: { 'Content-Type': 'application/pdf' }, + }); + if (!putRes.ok) throw new Error(`Upload failed: ${putRes.status}`); + + const sha = await sha256Hex(file); + await apiFetch(`/api/v1/admin/brochures/${brochure.id}/versions`, { + method: 'POST', + body: { + storageKey: grant.data.storageKey, + fileName: file.name, + fileSizeBytes: file.size, + contentSha256: sha, + }, + }); + toast.success('New version uploaded'); + onChange(); + } catch (err) { + toast.error(err instanceof Error ? err.message : 'Upload failed'); + } finally { + setUploading(false); + } + } + + return ( + + + + + {brochure.label} + {brochure.isDefault && ( + + default + + )} + {brochure.archivedAt && ( + + archived + + )} + + {brochure.versionCount} versions + + + + {brochure.description && ( +

{brochure.description}

+ )} + {brochure.currentVersion && ( +

+ Latest: {brochure.currentVersion.fileName} ( + {(brochure.currentVersion.fileSizeBytes / 1024 / 1024).toFixed(2)} MB,{' '} + {new Date(brochure.currentVersion.uploadedAt).toLocaleDateString()}) +

+ )} +
+ {!brochure.archivedAt && ( + <> + + {!brochure.isDefault && ( + + )} + + + )} +
+
+
+ ); +} + +function CreateBrochureDialog({ + open, + onOpenChange, + onCreated, +}: { + open: boolean; + onOpenChange: (o: boolean) => void; + onCreated: () => void; +}) { + const [label, setLabel] = useState(''); + const [description, setDescription] = useState(''); + const [isDefault, setIsDefault] = useState(false); + + const createMutation = useMutation({ + mutationFn: () => + apiFetch('/api/v1/admin/brochures', { + method: 'POST', + body: { + label, + description: description || null, + isDefault, + }, + }), + onSuccess: () => { + toast.success('Brochure created. Upload a version next.'); + setLabel(''); + setDescription(''); + setIsDefault(false); + onCreated(); + onOpenChange(false); + }, + }); + + return ( + + + + New brochure + + Create the brochure container, then upload a PDF version on the card that appears. + + +
+
+ + setLabel(e.target.value)} + placeholder="General overview" + /> +
+
+ +