59b9e8f177ebd00ec69c187854bfebe8a07ebb73
107 Commits
758d8628cf
test(client-archive): destructive smoke for smart-archive + smart-restore
Walks the new dossier dialog end-to-end on a freshly created client and asserts the toast + list refresh. A companion test exercises the smart-restore wizard.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

70105715a7
feat(clients): hard-delete with email-code confirmation (single + bulk)
Permanent client deletion is now reachable from:
- the archived single-client detail page (icon button, gated by the new admin.permanently_delete_clients permission)
- the archived clients list bulk action

Both flows are two-stage: request a 4-digit code (sent to the operator's account email, 10-minute Redis TTL), then enter both the code AND a typed confirmation (the client name for single deletes, "DELETE N CLIENTS" for bulk).

The cascade strategy preserves audit trails: signed documents, email threads, files and reminders are detached but retained; addresses, contacts, notes, the portal user, GDPR records, interests and reservations are deleted via FK cascade or explicit tx delete.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
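The two-stage confirmation can be sketched roughly as below. This is an illustrative stand-in, not the shipped code: an in-memory Map replaces the Redis key with its 10-minute TTL, and `requestDeleteCode` / `confirmDelete` are hypothetical names.

```typescript
import { randomInt, timingSafeEqual } from "node:crypto";

// In-memory stand-in for the Redis keyspace (10-minute TTL in the real flow).
const pendingCodes = new Map<string, { code: string; expiresAt: number }>();
const CODE_TTL_MS = 10 * 60 * 1000;

// Stage 1: mint a 4-digit code and store it under the operator's id.
export function requestDeleteCode(operatorId: string, now = Date.now()): string {
  const code = String(randomInt(0, 10_000)).padStart(4, "0");
  pendingCodes.set(operatorId, { code, expiresAt: now + CODE_TTL_MS });
  return code; // the real flow emails this to the operator instead of returning it
}

// Stage 2: the request must carry BOTH the emailed code and the typed
// confirmation phrase (client name for single deletes, "DELETE N CLIENTS" for bulk).
export function confirmDelete(
  operatorId: string,
  code: string,
  typed: string,
  expectedPhrase: string,
  now = Date.now(),
): boolean {
  const entry = pendingCodes.get(operatorId);
  if (!entry || now > entry.expiresAt) return false; // missing or expired
  const a = Buffer.from(entry.code);
  const b = Buffer.from(code.padStart(4, "0").slice(0, 4));
  if (a.length !== b.length || !timingSafeEqual(a, b)) return false;
  if (typed !== expectedPhrase) return false;
  pendingCodes.delete(operatorId); // codes are single use
  return true;
}
```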

789656bc70
feat(interests): manual stage override + Residential Partner system role
Manual stage override
Sales reps need to skip canTransitionStage rules when the data was
entered out of order — e.g. recording a contract_signed deal whose
earlier stages were never tracked in the system.
- New permission flag interests.override_stage in RolePermissions.
Plumbed through the schema TS type, the role-editor UI, the seed
file's pre-built roles (super_admin/director/sales_manager get it,
sales_agent + viewer don't), and the test factories.
- changeStageSchema gains an optional `override` boolean and the
service checks it before evaluating canTransitionStage. When
override=true the reason field becomes required (min 5 chars) and
is recorded in the audit log.
- The route handler gates `override` on the new permission so a
sales_agent without it can't pass override=true to bypass the rules.
- InterestStagePicker auto-detects when the requested transition is
blocked by the table and switches into "override mode" — shows an
amber warning, requires the reason, button label flips to
"Override stage". When the operator lacks the permission, the
warning is red and the button is disabled.
Residential Partner role
Per the smart-archive scoping conversation: external partners who
handle residential inquiries shouldn't see marina clients, yachts,
berths, or financials. The two residential_* permission groups
already exist; this commit just seeds a pre-built system role
("residential_partner") with those flags + minimal own-reminders, so
admins can invite a partner today via /admin/users without manually
building the permission set.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
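The service-side check described above can be condensed into a sketch like this. The transition table and function names are hypothetical stand-ins, assuming the rules stated in the commit: an override needs the permission AND a reason of at least 5 characters.

```typescript
// Hypothetical condensed form of the stage-change guard.
type StageChange = { override?: boolean; reason?: string };

// Illustrative subset of the real transition table.
const allowedNext: Record<string, string[]> = {
  open: ["qualified"],
  qualified: ["eoi_sent"],
};

export function canChangeStage(
  from: string,
  to: string,
  change: StageChange,
  hasOverridePerm: boolean,
): { ok: boolean; reason?: string } {
  const normallyAllowed = (allowedNext[from] ?? []).includes(to);
  if (normallyAllowed && !change.override) return { ok: true };
  if (!change.override) return { ok: false, reason: "transition blocked" };
  if (!hasOverridePerm) return { ok: false, reason: "missing interests.override_stage" };
  if (!change.reason || change.reason.trim().length < 5)
    return { ok: false, reason: "override requires a reason (min 5 chars)" };
  return { ok: true }; // the real service also records the reason in the audit log
}
```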

4bab6de8be
test(audit-tier-5): webhook + cross-port test coverage
Closes the highest-priority gaps from audit HIGH §19 + MED §§20–21:
* New tests/integration/documenso-webhook-route.test.ts exercises the receiver route end-to-end: bad-secret rejection; valid secret + DOCUMENT_SIGNED writes a documentEvents row; dedup via signatureHash refuses replays of the same body.
* tests/integration/documents-expired-webhook.test.ts gains a cross-port assertion: two ports hold the same documenso_id, port A receives the expired event, and port B's document must NOT flip. Made to pass by extending handleDocumentExpired to accept an optional `portId` and refuse to mutate when the lookup is ambiguous across multiple ports without one.
* tests/integration/custom-fields.test.ts gains a Cross-port Isolation describe: definitions in port A are invisible from port B, setValues from port B with a port-A fieldId is rejected, and getValues for a port-A entity from port B is empty.

Deferred: Tier 5.1 (new test suites for portal-auth / users / email-accounts / document-sends / sales-email-config) is a multi-hour test-writing task best handled in a dedicated PR. Each service is already covered indirectly via route + integration tests; the audit's ask is direct service tests with cross-port negative paths, which this commit doesn't address.

Test status: 1175/1175 vitest (was 1168), tsc clean.
Refs: docs/audit-comprehensive-2026-05-05.md HIGH §19 (auditor-J Issue 2) + MED §§20–21 (auditor-J Issues 3–4).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
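The dedup-via-signatureHash behavior the first test exercises can be sketched like this. A hypothetical stand-in, assuming the mechanics in the commit text: a Set models the uniqueness check that the real route performs against stored documentEvents rows.

```typescript
import { createHash } from "node:crypto";

// Stand-in for persisted signatureHash values on documentEvents rows.
const seenSignatureHashes = new Set<string>();

export function acceptWebhook(
  secret: string,
  expectedSecret: string,
  rawBody: string,
): "rejected" | "duplicate" | "accepted" {
  if (secret !== expectedSecret) return "rejected"; // bad-secret rejection
  const signatureHash = createHash("sha256").update(rawBody).digest("hex");
  if (seenSignatureHashes.has(signatureHash)) return "duplicate"; // replay of the same body refused
  seenSignatureHashes.add(signatureHash);
  return "accepted"; // the real route writes the documentEvents row here
}
```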

fc7595faf8
fix(audit-tier-2): error-surface hygiene — toastError + CodedError sweep
Two mechanical sweeps closing the audit's HIGH §16 + MED §11 findings:
* 38 client components / 56 toast.error sites converted to toastError(err) so the new admin error inspector becomes usable from user-reported issues — every failed inline-edit, save, send, archive, upload, etc. now carries the request-id + error-code (Copy ID action).
* 26 service files / 62 bare-Error throws converted to CodedError or the existing AppError subclasses. Adds new error codes: DOCUMENSO_UPSTREAM_ERROR (502), DOCUMENSO_AUTH_FAILURE (502), DOCUMENSO_TIMEOUT (504), OCR_UPSTREAM_ERROR (502), IMAP_UPSTREAM_ERROR (502), UMAMI_UPSTREAM_ERROR (502), UMAMI_NOT_CONFIGURED (409), and INSERT_RETURNING_EMPTY (500) for post-insert returning-empty guards.
* Five vitest assertions updated to match the new user-facing wording (client-merge "already been merged", expense/interest "couldn't find that …", documenso "signing service didn't respond").

Test status: 1168/1168 vitest, tsc clean.
Refs: docs/audit-comprehensive-2026-05-05.md HIGH §16 (auditor-H Issue 1) + MED §11 (auditor-G Issue 1).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
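The registry + CodedError pairing might look roughly like this. The two registry entries and the class shape are assumptions from the commit text, not the shipped src/lib/error-codes.ts.

```typescript
// Hypothetical mini-registry: stable code -> HTTP status + plain-text user message.
type ErrorCodeDef = { status: number; userMessage: string };

const ERROR_CODES: Record<string, ErrorCodeDef> = {
  DOCUMENSO_TIMEOUT: { status: 504, userMessage: "The signing service didn't respond in time." },
  INSERT_RETURNING_EMPTY: { status: 500, userMessage: "Something went wrong saving that record." },
};

export class CodedError extends Error {
  readonly code: string;
  readonly status: number;
  readonly internalMessage?: string; // admin-only; never sent to the client

  constructor(code: keyof typeof ERROR_CODES, internalMessage?: string) {
    const def = ERROR_CODES[code];
    super(def.userMessage); // the user always sees the registry's plain-text message
    this.code = code;
    this.status = def.status;
    this.internalMessage = internalMessage;
  }
}
```

Tests asserting `toMatchObject({ code })` rather than a message regex (as the later error-overhaul commit does) then stay stable even when user-facing wording changes.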

6a609ecf94
fix(audit-tier-1): timeouts, lifecycle, per-port Documenso, FK constraints
Closes the second wave of HIGH-priority audit findings:
* fetchWithTimeout helper (new src/lib/fetch-with-timeout.ts) wraps Documenso, OCR, currency, Umami, IMAP, etc. — a hung upstream can no longer pin a worker concurrency slot indefinitely. The OpenAI client passes timeout: 30_000. ImapFlow gets socket / greeting / connection timeouts.
* SIGTERM / SIGINT handler in src/server.ts drains in-flight HTTP, closes Socket.io, and disconnects Redis before exit; compose stop_grace_period bumped to 30s. Adds a closeSocketServer() helper.
* env.ts gains zod-validated PORT and MULTI_NODE_DEPLOYMENT, and filesystem.ts now reads from env (a typo can no longer silently disable the multi-node guard).
* Per-port Documenso template + recipient IDs land in system_settings with env fallback (PortDocumensoConfig now exposes eoiTemplateId, clientRecipientId, developerRecipientId, approvalRecipientId). document-templates.ts uses the per-port config and threads portId into documensoGenerateFromTemplate().
* Migration 0042 wires the eleven HIGH-tier missing FK constraints (documents/files/interests/reminders/berth_waiting_list/form_submissions) plus polymorphic CHECK round 2 (yacht_ownership_history.owner_type, document_sends.document_kind), invoices.billing_entity_id NOT EMPTY, and clients.merged_into self-FK. Drizzle schema columns updated to .references(...) where possible, so the misleading "FK wired in relations.ts" comments are gone.

Test status: 1168/1168 vitest, tsc clean.
Refs: docs/audit-comprehensive-2026-05-05.md HIGH §§5,6,7,8,9,10 + MED §§14,15,16,18.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
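The timeout-wrapping idea can be sketched as follows. `withTimeout` is a hypothetical generic core, not the shipped helper's API; the real src/lib/fetch-with-timeout.ts presumably wires the same idea into fetch via an abort signal.

```typescript
// Race a piece of async work against a timer; reject if the timer wins,
// so a hung upstream releases the worker slot instead of pinning it.
export function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`upstream timed out after ${ms}ms`)),
      ms,
    );
    work.then(
      (v) => { clearTimeout(timer); resolve(v); },
      (e) => { clearTimeout(timer); reject(e); },
    );
  });
}

// Applied to fetch, the same shape becomes an abort signal, which also
// cancels the underlying socket rather than just abandoning the promise.
export function fetchWithTimeout(url: string, ms: number): Promise<Response> {
  return fetch(url, { signal: AbortSignal.timeout(ms) });
}
```

The AbortSignal variant is generally preferable for HTTP because it tears down the connection; the promise race only stops waiting.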

312779c0c5
fix(security): tier-0 audit blockers (next CVE, role gate, perm traps, key validation, rate limits)
Closes the five highest-risk findings from docs/audit-comprehensive-2026-05-05.md so the platform is not exposed while the rest of the audit backlog (1 CRIT + 18 HIGH + 32 MED + 23 LOW) is worked through:
* CVE-2025-29927 — bump next 15.1.0 → 15.2.9; nginx strips X-Middleware-Subrequest at the edge as defense-in-depth.
* Cross-tenant role escalation — POST/PATCH/DELETE on /admin/roles now require super-admin (was: any holder of admin.manage_users). Adds a shared `requireSuperAdmin(ctx)` helper.
* Silent-403 traps — `documents.edit` and `files.edit` keys added to RolePermissions; seeded role values updated; migration 0041 backfills the new keys on every existing roles + port_role_overrides JSONB. File routes remap the dead `create` action to `upload` / `manage_folders`.
* Berth-PDF / brochure register endpoints — reject body.storageKey unless it matches the namespace the matching presign endpoint issued (prevents repointing a tenant's PDF at foreign-port bytes).
* Portal auth rate limits — sign-in 5/15min/(ip,email), forgot-password 3/hr/IP, activate/reset/set-password 10/hr/IP. Adds `enforcePublicRateLimit()` for non-`withAuth` routes.

Test status unchanged: 1168/1168 vitest, tsc clean.
Refs: docs/audit-comprehensive-2026-05-05.md (CRITICAL, HIGH §§1–4)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
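The fixed-window limits above can be sketched in memory. This is an illustrative stand-in: the shipped `enforcePublicRateLimit()` is Redis-backed, and `allowAttempt` plus the key shape are hypothetical.

```typescript
// One counter per key per window; the key encodes the rule's scope,
// e.g. `signin:${ip}:${email}` for the 5-per-15-minutes sign-in limit.
const windows = new Map<string, { count: number; resetAt: number }>();

export function allowAttempt(
  key: string,
  limit: number,
  windowMs: number,
  now = Date.now(),
): boolean {
  const w = windows.get(key);
  if (!w || now >= w.resetAt) {
    windows.set(key, { count: 1, resetAt: now + windowMs }); // fresh window
    return true;
  }
  w.count += 1;
  return w.count <= limit;
}
```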

4723994bdc
feat(errors): platform-wide request ids + error codes + admin inspector
End-to-end error-handling overhaul. A user hitting any failure now sees
a plain-text message + stable error code + reference id. A super admin
can paste the id into /admin/errors/<id> for the full request shape,
sanitized body, error stack, and a heuristic likely-cause hint.
REQUEST CONTEXT (AsyncLocalStorage)
- src/lib/request-context.ts mints a per-request frame carrying
requestId + portId + userId + method + path + start timestamp.
- withAuth wraps every authenticated handler in runWithRequestContext
and accepts an upstream X-Request-Id header (validated shape) or
generates a fresh UUID. The id ALWAYS leaves on the X-Request-Id
response header, including early-return 401/403/4xx paths.
- Pino logger reads from the same context via mixin — every log
line emitted during the request automatically carries the ids
with no per-call threading.
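The request-context mechanism above can be sketched with node's AsyncLocalStorage. Field names follow the commit text, but this is a condensed stand-in, not the shipped src/lib/request-context.ts; the id-shape regex is an assumption.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";

type RequestFrame = { requestId: string; portId?: string; userId?: string; path: string };

const als = new AsyncLocalStorage<RequestFrame>();

export function runWithRequestContext<T>(
  frame: Omit<RequestFrame, "requestId"> & { requestId?: string },
  fn: () => T,
): T {
  // Accept a shape-validated upstream X-Request-Id, otherwise mint a fresh UUID.
  const requestId =
    frame.requestId && /^[A-Za-z0-9-]{8,64}$/.test(frame.requestId)
      ? frame.requestId
      : randomUUID();
  return als.run({ ...frame, requestId }, fn);
}

// Anything on the same async path (e.g. a pino mixin) reads the frame
// without per-call threading.
export const currentRequestId = () => als.getStore()?.requestId;
```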
ERROR CODE REGISTRY
- src/lib/error-codes.ts defines stable DOMAIN_REASON codes with
HTTP status + plain-text user-facing message (no jargon, written
for the rep on the phone with a customer).
- New CodedError class wraps a registered code + optional
internalMessage (admin-only — never sent to client).
- Existing AppError subclasses got plain-text default rewrites so
legacy throw sites improve immediately without migration.
- High-impact services migrated to specific codes:
expenses (RECEIPT_REQUIRED, INVOICE_LINKED), interest-berths
(CROSS_PORT_LINK_REJECTED), berth-pdf (PDF_MAGIC_BYTE / PDF_EMPTY /
PDF_TOO_LARGE / VERSION_ALREADY_CURRENT), recommender
(INTEREST_PORT_MISMATCH).
ERROR ENVELOPE
- errorResponse always sets X-Request-Id header + requestId field.
- 5xx responses include a "Quote error ID …" friendly line.
- 4xx kept clean (validation, permission, not-found don't pollute
the inspector — they're already in audit log).
PERSISTENCE (error_events table, migration 0040)
- One row per 5xx, keyed on requestId, with method/path/status/error
name+message/stack head (4KB cap)/sanitized body excerpt (1KB cap;
password/token/secret/etc keys redacted)/duration/IP/UA/metadata.
- captureErrorEvent extracts Postgres SQLSTATE/severity/cause.code
so the classifier can recognize FK / unique / NOT NULL / schema-
drift violations.
- Failure to persist is logged-not-thrown.
LIKELY-CULPRIT CLASSIFIER (src/lib/error-classifier.ts)
- 4-pass heuristic (first match wins):
1. Postgres SQLSTATE → human reason (23503 FK, 23505 unique,
42703 schema drift, 53300 connection limit, …)
2. Error class name (AbortError, TimeoutError, FetchError,
ZodError)
3. Stack-path patterns (/lib/storage/, /lib/email/, documenso,
openai|claude, /queue/workers/)
4. Free-text message keywords (econnrefused, rate limit, timeout,
unauthorized|invalid api key)
- Returns { label, hint, subsystem } for the inspector badge.
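A toy version of the 4-pass, first-match-wins flow might look like this. The pattern tables here are tiny illustrative subsets; the real src/lib/error-classifier.ts carries far more entries.

```typescript
type Culprit = { label: string; subsystem: string };

// Pass-1 table: Postgres SQLSTATE -> human reason (subset).
const SQLSTATE_HINTS: Record<string, Culprit> = {
  "23503": { label: "foreign-key violation", subsystem: "postgres" },
  "23505": { label: "unique violation", subsystem: "postgres" },
  "42703": { label: "schema drift (unknown column)", subsystem: "postgres" },
};

export function classify(err: {
  name?: string;
  message?: string;
  stack?: string;
  sqlstate?: string;
}): Culprit {
  // Pass 1: Postgres SQLSTATE.
  if (err.sqlstate && SQLSTATE_HINTS[err.sqlstate]) return SQLSTATE_HINTS[err.sqlstate];
  // Pass 2: error class name.
  if (err.name === "AbortError" || err.name === "TimeoutError")
    return { label: "upstream timeout", subsystem: "network" };
  // Pass 3: stack-path patterns.
  if (err.stack?.includes("/lib/storage/")) return { label: "storage backend", subsystem: "storage" };
  // Pass 4: free-text message keywords.
  if (/econnrefused|rate limit|timeout/i.test(err.message ?? ""))
    return { label: "connection problem", subsystem: "network" };
  return { label: "unclassified", subsystem: "unknown" };
}
```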
CLIENT SIDE
- apiFetch throws structured ApiError with message + code + requestId
+ details + retryAfter.
- toastError() helper renders the standard 3-line toast:
plain message / Error code: X / Reference ID: Y [Copy ID].
ADMIN INSPECTOR
- /<port>/admin/errors lists captured 5xx with status badge + path +
likely-culprit badge + truncated message + reference id. Filter by
status code; auto-refresh via TanStack Query.
- /<port>/admin/errors/<requestId> deep-dive: request shape, full
error name+message+stack, sanitized body excerpt, raw metadata,
registered-code lookup (so admin can compare to what user saw),
likely-culprit hint with subsystem tag.
- /<port>/admin/errors/codes is the in-app code reference page —
every registered code grouped by domain prefix, searchable, with
HTTP status + user message inline. Linked from inspector header
so admins can flip to it while triaging.
- Permission: admin.view_audit_log. Super admins see all ports;
regular admins port-scoped.
- system-monitoring dashboard now surfaces error_events alongside
permission_denied audit + queue failed jobs (RecentError gains
source: 'request' variant).
DOCS
- docs/error-handling.md walks through coded errors, plain-text
message guidelines, client toasting, admin inspector usage,
persistence rules, classifier internals, pruning, and the
legacy → CodedError migration path.
MIGRATION SAFETY
- Audit confirmed all 41 migrations (0000-0040) apply cleanly in
journal order against an empty DB. 0040 references ports(id)
which exists from 0000. 0035/0038 don't deadlock under sequential
psql -f. Removed redundant idx_ds_sent_by from 0038 (created in
0037).
Tests: 1168/1168 vitest passing. tsc clean.
- security-error-responses tests updated for plain-text messages
+ new optional response keys (code/requestId/message).
- berth-pdf-versions tests assert stable error codes via
toMatchObject({ code }) rather than message regex.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

ade4c9e77d
fix(audit-v2): platform-wide post-merge hardening across 5 domains
Five-domain audit (security, routes, DB, integrations, UI/UX) ran after
the

d4b3a1338f
fix(security): scope berth-pdf service entrypoints by portId
Post-merge security review caught a cross-tenant authorization bypass
in the per-berth PDF endpoints (HIGH severity, confidence 10):
GET /api/v1/berths/[id]/pdf-versions
POST /api/v1/berths/[id]/pdf-versions
POST /api/v1/berths/[id]/pdf-upload-url
POST /api/v1/berths/[id]/pdf-versions/[versionId]/rollback
POST /api/v1/berths/[id]/pdf-versions/parse-results/apply
Each handler looked up the target berth by id only — `eq(berths.id, ...)`.
withAuth resolves ctx.portId from the user-controlled X-Port-Id header
(only verifying the user has SOME role on that port), and
withPermission('berths', 'view'|'edit', ...) is a coarse capability
check, not a row-level grant. A rep with berths:edit on Port A could
supply a Port B berth UUID and:
- list + receive 15-min presigned download URLs to every PDF version
- mint an upload URL targeting `berths/<port-B-id>/uploads/...`
- POST a new version (overwriting current_pdf_version_id on foreign berth)
- rollback to any prior version on a foreign berth
- apply rep-confirmed parse-result fields onto a foreign berth's columns
Sibling routes (waiting-list etc.) already pair the id filter with
`eq(berths.portId, ctx.portId)`, so this was an omission, not design.
Fix:
- Push `portId: string` into uploadBerthPdf, listBerthPdfVersions,
rollbackToVersion, applyParseResults, reconcilePdfWithBerth.
- Each function now filters the berth lookup with
`and(eq(berths.id, ...), eq(berths.portId, portId))` and throws
NotFoundError on mismatch (no foreign-port disclosure).
- Inline the same `and(...)` filter in the pdf-upload-url handler.
- Every handler passes ctx.portId through.
Coverage:
- New `cross-port tenant guard` test exercises every entrypoint with a
foreign-port id and asserts NotFoundError.
- 1164/1164 vitest passing. Typecheck clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
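The contract of the fix can be shown in pure logic. The shipped code does this in drizzle with `and(eq(berths.id, ...), eq(berths.portId, portId))`; this array-backed stand-in and `findBerthScoped` are hypothetical, and exist only to show that a cross-port miss is indistinguishable from a nonexistent row.

```typescript
type Berth = { id: string; portId: string };

export class NotFoundError extends Error {}

// Every lookup pairs the row id with the caller's portId.
export function findBerthScoped(rows: Berth[], id: string, portId: string): Berth {
  const row = rows.find((b) => b.id === id && b.portId === portId);
  if (!row) throw new NotFoundError("berth not found"); // no foreign-port disclosure
  return row;
}
```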

86372a857f
fix(audit): post-review hardening across phases 0-7
15 of 17 findings from the consolidated audit (3 reviewer agents on the previously-shipped phase commits). The remaining two are nice-to-have follow-ups, deferred.

Critical (data integrity / security):
- Public berths API: closed-deal junction rows no longer flip a berth to "Under Offer" - filter on `interests.outcome IS NULL` so won/lost/cancelled don't pollute public-map status. Both list + single-mooring routes.
- Recommender heat: cancelled outcomes now count as fall-throughs (the SQL was `LIKE 'lost%'`, which silently dropped them, leaving cancelled-only berths stuck in tier A).
- Filesystem presignDownload returns an absolute URL (origin from APP_URL) so emailed download links resolve from external mail clients.
- Magic-byte verification on the presigned-PUT path: both per-berth PDFs and brochures stream the first 5 bytes via the storage backend and reject + delete on `%PDF-` mismatch (was only enforced when the server saw the buffer; presign-PUT was wide open).
- Replay-protection TTL aligned to the token's own expiry (was a fixed 30 min, but send-out tokens live 24 h). Floor 60 s, ceiling 25 days.
- Brochures: unique partial index on (port_id) WHERE is_default=true + 0032 migration. Closes the read-then-write race in the create/update transactions.

Important:
- Recommender SQL: defense-in-depth `i.port_id = $portId` filter on the aggregates CTE.
- berth-pdf service: per-berth pg_advisory_xact_lock around the version-number SELECT + insert. The storage key is now UUID-based so concurrent uploads can't collide on blob paths. Replaces `nextVersionNumber` with the tx-bound variant.
- berth-pdf apply: rejects with ConflictError when parse_results contain a mooring-mismatch warning unless the caller passes `confirmMooringMismatch: true` (the force-reconfirm gate was UI-only).
- Send-out body: HTML-escape the brochure filename in the download-link fallback (XSS guard).
- parseDecimalWithUnit rejects negative numbers.
- listClients: DISTINCT ON for primary-contact resolution bounds the contact-row count to ~2 per client.

Defensive:
- verifyProxyToken rejects NaN/Infinity expiries via Number.isFinite.
- Replaced sql ANY() with inArray() in interest-berths.

Tests: 1145 -> 1163 passing.
Deferred: bulk-send rate limit (no bulk endpoint today), markdown italic regex breaking links with asterisks (cosmetic).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
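The replay-protection TTL alignment reduces to a small clamp. A sketch under the stated rule (cache entry lives as long as the token it protects, floored at 60 s and capped at 25 days); `replayTtlSeconds` is a hypothetical name.

```typescript
const TTL_FLOOR_S = 60;
const TTL_CEILING_S = 25 * 24 * 60 * 60;

// TTL for the replay-protection cache entry, derived from the token's
// own expiry instead of a fixed 30 minutes.
export function replayTtlSeconds(tokenExpiresAt: number, now = Date.now()): number {
  const remaining = Math.ceil((tokenExpiresAt - now) / 1000);
  return Math.min(TTL_CEILING_S, Math.max(TTL_FLOOR_S, remaining));
}
```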

b4776b4c3c
feat(interests): linked berths list with role-flag toggles + EOI bypass
Implements plan §5.5: a per-interest "Linked berths" panel mounted above the
recommender on the interest detail Overview tab. Each junction row exposes
the role-flag controls reps need to manage the M:M `interest_berths` link
without the legacy single-berth flow.
UI (`src/components/interests/linked-berths-list.tsx`)
* Rows ordered with primary first; mooring number links to /berths/[id], with
area + a status pill (available/under_offer/sold) and a "Primary" chip.
* "Specifically pitching" Switch (writes `is_specific_interest`) with the
consequence text from §1: "This berth will appear as under interest on the
public map" / "This berth is hidden from the public map".
* "Mark in EOI bundle" Switch (writes `is_in_eoi_bundle`).
* "Set as primary" button when the row isn't primary - the existing
`upsertInterestBerth` helper demotes the prior primary in the same tx.
* "Bypass EOI for this berth" with reason textarea, ONLY rendered when the
parent interest's `eoiStatus === 'signed'`. Writes the bypass triple
(`eoi_bypass_reason`, `eoi_bypassed_by` = caller, `eoi_bypassed_at` = now);
also supports clearing.
* Remove-from-interest action gated by a confirmation dialog.
API (`src/app/api/v1/interests/[id]/berths/...`)
* `GET /` - list endpoint returning `listBerthsForInterest` plus the parent
interest's `eoiStatus` in `meta.eoiStatus` so the UI can decide whether to
show the bypass control.
* `PATCH /[berthId]` - partial update of the junction row's flags + bypass
fields. Server-side guard: rejects bypass writes when `eoiStatus !==
'signed'` (defence in depth - never trust the UI to gate this).
* `DELETE /[berthId]` - calls `removeInterestBerth`.
* The existing POST stays unchanged. All routes wrapped with
`withAuth(withPermission('interests', view|edit, ...))`. portId from ctx;
cross-port reads/writes return 404 for enumeration prevention (§14.10).
Service changes (`src/lib/services/interest-berths.service.ts`)
* `upsertInterestBerth` now accepts `eoiBypassReason` (tri-state: omit = no
change, non-empty = record, null = clear) and `eoiBypassedBy`. The bypass
triple moves as a unit, with `eoi_bypassed_at` stamped server-side.
* `listBerthsForInterest` now returns berth detail (area, status, dimensions)
alongside the junction row, typed as `InterestBerthWithDetails`.
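The tri-state contract for the bypass field can be sketched as below. A hypothetical stand-alone reduction of the behavior described: `undefined` (field omitted) leaves the stored triple alone, an explicit null clears all three fields as a unit, and a non-empty string records a new bypass with the caller and a server-side timestamp.

```typescript
type BypassTriple = {
  eoiBypassReason: string | null;
  eoiBypassedBy: string | null;
  eoiBypassedAt: Date | null;
};

export function applyBypassPatch(
  current: BypassTriple,
  patch: { eoiBypassReason?: string | null },
  callerId: string,
  now = new Date(),
): BypassTriple {
  if (patch.eoiBypassReason === undefined) return current; // omit = no change
  if (patch.eoiBypassReason === null)
    return { eoiBypassReason: null, eoiBypassedBy: null, eoiBypassedAt: null }; // null = clear
  // Non-empty = record; the triple always moves as a unit.
  return { eoiBypassReason: patch.eoiBypassReason, eoiBypassedBy: callerId, eoiBypassedAt: now };
}
```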
Socket: added `interest:berthLinkUpdated` event for live UI refreshes.
Tests: 18 new integration tests in `tests/integration/api/interest-berths.test.ts`
covering happy paths, primary-demotion in same tx, bypass write/clear, the
"requires signed EOI" guard, cross-port 404s, missing-link 404s, empty-body
400, and viewer 403 through the permission gate.

a0091e4ca6
feat(emails): sales send-out flows + brochures + email-from settings
Phase 7 of the berth-recommender refactor (plan §3.3, §4.8, §4.9, §5.7,
§5.8, §5.9, §11.1, §14.7, §14.9). Adds the rep-driven send-out path for
per-berth PDFs and port-wide brochures, the per-port sales SMTP/IMAP
config + body templates, and the supporting admin UI.
Migration: 0031_brochures_and_document_sends.sql
Schema additions:
- brochures (port-wide, with isDefault marker + archive)
- brochure_versions (versioned uploads, storageKey per §4.7a)
- document_sends (audit log of every rep-initiated send; failures
captured with failedAt + errorReason). berthPdfVersionId is a plain
text column (no FK) — loose-coupled to Phase 6b's berth_pdf_versions
so the two phases stay independent.
§14.7 critical mitigations:
- Body XSS: rep-authored markdown goes through renderEmailBody()
(HTML-escape first, then a tight allowlist of bold/italic/code/link
rules). https:// + mailto: only — javascript:/data: URLs stripped.
Tested against script/img/iframe/svg/onerror polyglots.
- Recipient typo: strict email regex + two-step confirm modal that
shows the exact recipient before send.
- Unresolved merge fields: pre-send dry-run /preview endpoint blocks
submission until findUnresolvedTokens() returns empty.
- SMTP failure: every transport rejection writes a document_sends row
with failedAt + errorReason; UI surfaces the message.
- Hourly per-user rate limit: 50 sends/user/hour via existing
checkRateLimit().
- Size threshold fallback (§11.1): files above
email_attach_threshold_mb (default 15) ship as a 24h signed-URL
download link in the body instead of an attachment. Storage stream
flows directly to nodemailer to avoid buffering 20MB+.
§14.10 critical mitigation:
- SMTP/IMAP passwords encrypted at rest via the existing
EMAIL_CREDENTIAL_KEY (AES-256-GCM). The /api/v1/admin/email/
sales-config GET endpoint never returns the decrypted value — only
a *PassIsSet boolean. PATCH treats empty string as "leave unchanged"
and explicit null as "clear", so the masked-placeholder UI
round-trips without forcing re-entry on every save.
system_settings keys (per-port unless noted):
- sales_from_address, sales_smtp_{host,port,secure,user,pass_encrypted}
- sales_imap_{host,port,user,pass_encrypted}
- sales_auth_method (default app_password)
- noreply_from_address
- email_template_send_berth_pdf_body, email_template_send_brochure_body
- brochure_max_upload_mb (default 50)
- email_attach_threshold_mb (default 15)
UI surfaces (per §5.7, §5.8, §5.9):
- <SendDocumentDialog> shared 2-step compose+confirm flow.
- <SendBerthPdfDialog>, <SendDocumentsDialog>, <SendFromInterestButton>
wrappers per detail page.
- /[portSlug]/admin/brochures: list, upload (direct-to-storage
presigned PUT for the 20MB+ files per §11.1), default toggle,
archive.
- /[portSlug]/admin/email extended with <SalesEmailConfigCard>:
SMTP + IMAP creds, body templates, threshold/max settings.
Storage: every upload + download goes through getStorageBackend() —
no direct minio imports, per Phase 6a contract.
Tests: 1145 vitest passing (+ 50 new in
markdown-email-sanitization.test.ts, document-sends-validators.test.ts,
sales-email-config-validators.test.ts).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
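The §11.1 size-threshold fallback is a one-line decision. A sketch with hypothetical names, assuming the stated rule: files up to email_attach_threshold_mb (default 15) ship as attachments, larger ones as a 24h signed-URL download link in the body.

```typescript
export function deliveryMode(
  fileSizeBytes: number,
  attachThresholdMb = 15, // mirrors the email_attach_threshold_mb default
): "attachment" | "signed-link" {
  return fileSizeBytes <= attachThresholdMb * 1024 * 1024
    ? "attachment"
    : "signed-link";
}
```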

249ffe3e4a
feat(berths): per-berth PDF storage (versioned) + reverse parser
Phase 6b of the berth-recommender refactor (see
docs/berth-recommender-and-pdf-plan.md §3.2, §3.3, §4.7b, §11.1, §14.6).
Builds on the Phase 6a pluggable storage backend (commit

83693dd993
feat(storage): pluggable s3-or-filesystem backend + migration CLI + admin UI
Phase 6a from docs/berth-recommender-and-pdf-plan.md §4.7a + §14.9a. Lays
the storage groundwork for Phase 6b/7 file-bearing schemas (per-berth PDFs,
brochures) without touching those domains yet.
New files:
- src/lib/storage/index.ts StorageBackend interface + per-process
factory keyed on system_settings.
- src/lib/storage/s3.ts S3-compatible backend (MinIO/AWS/B2/R2/
Wasabi/Tigris) wrapping the existing minio
JS client. Includes a healthCheck() used
by the admin "Test connection" button.
- src/lib/storage/filesystem.ts Local filesystem backend with all §14.9a
mitigations baked in.
- src/lib/storage/migrate.ts Shared migration core — pg_advisory_lock,
per-row resumable progress markers,
sha256 round-trip verification, atomic
storage_backend flip on success.
- scripts/migrate-storage.ts Thin CLI shim around runMigration().
- src/app/api/storage/[token]/route.ts
Filesystem proxy GET. Verifies HMAC,
enforces single-use replay protection
via Redis SET NX, streams via NextResponse
ReadableStream with explicit Content-Type
+ Content-Disposition. Node runtime only.
- src/app/api/v1/admin/storage/route.ts
GET status + POST connection test.
- src/app/api/v1/admin/storage/migrate/route.ts
Super-admin-only POST that runs the
exact same runMigration() as the CLI.
- src/app/(dashboard)/[portSlug]/admin/storage/page.tsx
Super-admin admin UI (current backend,
capacity stats, switch button with
dry-run, test connection, backup hint).
- src/components/admin/storage-admin-panel.tsx
Client component for the page above.
§14.9a critical mitigations implemented:
- Path-traversal: storage keys validated against ^[a-zA-Z0-9/_.-]+$;
`..`, `.`, `//`, leading `/`, and overlength keys rejected.
- Realpath: storage root realpath'd at create time, every per-key
resolution checked against the realpath'd prefix.
- Storage root created (or chmod'd) to 0o700.
- Multi-node refusal: FilesystemBackend.create() throws when
MULTI_NODE_DEPLOYMENT=true.
- HMAC token: sha256-HMAC over the (key, expiry, nonce, filename,
content-type) payload. Verified with timingSafeEqual; bad sig,
expired, or invalid-key payloads all return 403.
- Single-use replay: token body cached in Redis SET NX EX 1800s.
- sha256 round-trip: copyAndVerify() re-fetches from the target after
put() and aborts the migration on any mismatch.
- Free-disk pre-flight: when migrating to filesystem, sums byte counts
via source.head() and aborts if free space < total * 1.2.
- pg_advisory_lock(0xc7000a01) prevents concurrent migrations.
- Resumable: per-row progress markers in _storage_migration_progress.
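The path-traversal gate on storage keys can be sketched as below, following the rules in the list above (charset allowlist, no `..`/`.` segments, no `//`, no leading `/`, bounded length). The max-length constant is an illustrative assumption, and the shipped validator in the filesystem backend additionally realpath-checks every resolution.

```typescript
const KEY_RE = /^[a-zA-Z0-9/_.-]+$/;
const MAX_KEY_LENGTH = 512; // illustrative bound, not the shipped constant

export function isValidStorageKey(key: string): boolean {
  if (key.length === 0 || key.length > MAX_KEY_LENGTH) return false;
  if (!KEY_RE.test(key)) return false;            // charset allowlist
  if (key.startsWith("/") || key.includes("//")) return false;
  const segments = key.split("/");
  // No empty, `.` or `..` segments anywhere in the key.
  return segments.every((s) => s.length > 0 && s !== "." && s !== "..");
}
```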
system_settings keys read by the factory (jsonb, no schema change):
storage_backend, storage_s3_endpoint, storage_s3_region,
storage_s3_bucket, storage_s3_access_key,
storage_s3_secret_key_encrypted, storage_s3_force_path_style,
storage_filesystem_root, storage_proxy_hmac_secret_encrypted.
Defaults: storage_backend=`s3`, storage_filesystem_root=`./storage`
(./storage added to .gitignore).
Tests added (34 tests, all green):
- tests/unit/storage/filesystem-backend.test.ts — key validation
allow/reject matrix, realpath escape, 0o700 perms, multi-node
refusal, HMAC token sign/verify/tamper/expire/invalid-key.
- tests/unit/storage/copy-and-verify.test.ts — sha256 mismatch on
round-trip aborts the migration.
- tests/integration/storage/proxy-route.test.ts — happy path, wrong
HMAC secret, expired token, replay rejection.
Phase 6a ships zero file-bearing tables — TABLES_WITH_STORAGE_KEYS is
intentionally empty. berth_pdf_versions and brochure_versions land in
Phase 6b and join the list there. Existing s3_key columns: only
gdpr_export_jobs.storage_key, already named correctly — no rename needed.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

e00e812199
feat(eoi): multi-berth EOI generation + berth-range formatter
Plan §4.6 + §1: a render function that compresses every berth marked
is_in_eoi_bundle=true on an interest into a compact range string
("A1-A3, B5-B7"), wired into both EOI generation paths (the Documenso
template-generate call and the in-app pdf-lib AcroForm fill).
- src/lib/templates/berth-range.ts: pure formatBerthRange() with the
full edge-case set from §4.6 - empty, single, run, gap, multiple
prefixes, sort/dedup, multi-letter prefixes, non-canonical
passthrough, long ranges. Sorts by (prefix, number); dedupes; passes
non-canonical inputs through with a logger warning.
- src/lib/templates/merge-fields.ts: new {{eoi.berthRange}} token
added to VALID_MERGE_TOKENS allow-list under a fresh `eoi` scope so
unknown-token validation at template creation time still rejects
typos.
- src/lib/services/eoi-context.ts: EoiContext gains eoiBerthRange.
Resolved by joining interest_berths (is_in_eoi_bundle=true) →
berths and feeding the mooring numbers through formatBerthRange.
- src/lib/services/documenso-payload.ts: formValues now includes
"Berth Range" alongside the legacy "Berth Number". Multi-berth EOIs
surface here; single-berth EOIs duplicate the primary.
- src/lib/pdf/fill-eoi-form.ts: in-app AcroForm fill mirrors the
Documenso payload by populating "Berth Range". Falls back silently
when older PDFs don't have the field (setText is no-op-on-missing).
15 unit tests on the formatter; existing EoiContext + Documenso
payload tests updated to assert the new field. 1022 -> 1037 passing.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
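The formatter's core ("A1-A3, B5-B7") can be reconstructed plausibly as below. This is a sketch, not the shipped src/lib/templates/berth-range.ts: it covers sort/dedup/run-compression but omits edge cases like non-canonical passthrough with a logger warning.

```typescript
// Parse prefix+number, sort by (prefix, number), dedupe, compress runs.
export function formatBerthRange(moorings: string[]): string {
  const parsed = moorings
    .map((m) => /^([A-Z]+)(\d+)$/.exec(m))
    .filter((m): m is RegExpExecArray => m !== null)
    .map((m) => ({ prefix: m[1], n: Number(m[2]) }));
  parsed.sort((a, b) => a.prefix.localeCompare(b.prefix) || a.n - b.n);

  const out: string[] = [];
  for (let i = 0; i < parsed.length; ) {
    const { prefix, n } = parsed[i];
    let end = n;
    let j = i + 1;
    // Extend the run while the next entry is the same prefix and
    // consecutive (<= end + 1 also swallows duplicates).
    while (j < parsed.length && parsed[j].prefix === prefix && parsed[j].n <= end + 1) {
      end = Math.max(end, parsed[j].n);
      j++;
    }
    out.push(end > n ? `${prefix}${n}-${prefix}${end}` : `${prefix}${n}`);
    i = j;
  }
  return out.join(", ");
}
```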

b1e787e55c
feat(recommender): SQL ranking + tier ladder + heat scoring
Plan §4.4 + §13: pure SQL recommender, no AI. Single CTE chain (feasible -> aggregates) + JS-side tier classification, fall-through cooldown filter, heat scoring, and fit ranking. Per-port settings via system_settings layered over global + DEFAULT_RECOMMENDER_SETTINGS.

Tier ladder (default):
  A: no interest history
  B: lost-only history (still recommendable + boosted by heat)
  C: active interest in an early stage (open..eoi_signed)
  D: active interest at deposit_10pct or beyond (hidden by default)

Heat (only for tier B):
  recency weight 30         full @ <=30 days, decays to 0 @ 365 days
  furthest stage weight 40  full when the prior interest reached deposit
  interest count weight 15  saturates at 5+
  EOI count weight 15       saturates at 3+

Multi-port isolation enforced (§14.10 critical): the SQL filters by port_id AND the entry-point function rejects cross-port interest lookups with an explicit error. Fall-through policy supports immediate_with_heat (default), cooldown, and never_auto_recommend.

15 unit tests covering tier classification, heat saturation, weight tuning, and the zero-weight guard. Smoke-tested end-to-end via scripts/dev-recommender-smoke.ts.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
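The tier-B heat weights can be sketched as a pure function. Assumptions: linear decay of the recency weight between day 30 and day 365, and a boolean reached-deposit flag standing in for the furthest-stage component (the real scorer may grade stages more finely); `heatScore` is a hypothetical name.

```typescript
export function heatScore(input: {
  daysSinceLastInterest: number;
  reachedDeposit: boolean; // furthest prior stage hit deposit or beyond
  interestCount: number;
  eoiCount: number;
}): number {
  const d = input.daysSinceLastInterest;
  // Recency: full 30 points up to 30 days, linear decay to 0 at 365 days.
  const recency = d <= 30 ? 30 : d >= 365 ? 0 : 30 * ((365 - d) / (365 - 30));
  const stage = input.reachedDeposit ? 40 : 0;
  const interests = (15 * Math.min(input.interestCount, 5)) / 5; // saturates at 5+
  const eois = (15 * Math.min(input.eoiCount, 3)) / 3;           // saturates at 3+
  return recency + stage + interests + eois;
}
```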

fb1116f1d4
feat(berths): public berths API + health env-match endpoint
Adds the read-only public-website data feed promised by plan §4.5 and
§7.3. The marketing site's `getBerths()` swap is now a one-line URL
change against the existing 5-min TTL behaviour.
- src/app/api/public/berths/route.ts: GET / unauth, returns the full
port-nimara berth list as { list, pageInfo } in the verbatim NocoDB
shape ("Mooring Number", "Side Pontoon", quoted-key fields). Cache:
s-maxage=300 + stale-while-revalidate=60. portSlug query param lets
future ports opt in.
- src/app/api/public/berths/[mooringNumber]/route.ts: GET single. Up-
front regex validation (^[A-Z]+\\d+$) rejects malformed lookups with
400 + cache-control:no-store before hitting the DB. 404 + no-store
when not found.
- src/app/api/public/health/route.ts: returns { status, env, appUrl,
timestamp } so the marketing site can refuse to start when its
CRM_PUBLIC_URL points at a different deployment env (§14.8 critical
env-mismatch protection).
- src/lib/services/public-berths.ts: pure mapper with derivePublicStatus
("sold" wins; otherwise specific-interest junction OR
status='under_offer' -> "Under Offer"; else "Available").
- 11 unit tests covering numeric coercion, status derivation,
archived-berth handling, missing-map-data omission, and the
status-precedence rule that "sold" trumps the specific-interest
signal.
Smoke-tested: /api/public/berths -> 117 rows, A1 correctly shows
"Under Offer" (has interest_berths.is_specific_interest=true link),
INVALID -> 400, Z99 -> 404. Total tests: 996 -> 1007.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
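The precedence rule in `derivePublicStatus` can be sketched as follows. A hypothetical reconstruction: the public label for sold berths is assumed to be "Sold", and the real mapper in src/lib/services/public-berths.ts may differ in shape:

```typescript
// Hypothetical sketch of the status-precedence rule described above:
// "sold" wins; then specific interest OR under_offer; else Available.
type CrmStatus = "sold" | "under_offer" | "available";

function derivePublicStatus(status: CrmStatus, hasSpecificInterest: boolean): string {
  if (status === "sold") return "Sold"; // assumed public label
  if (hasSpecificInterest || status === "under_offer") return "Under Offer";
  return "Available";
}
```

This is the rule the unit tests pin down: a sold berth with a dangling specific-interest link still renders as sold, never as "Under Offer".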
57cbc9a506
fix(tests): cascade interest_berths in global teardown
The Phase 2b refactor (commit
6e3d910c76
refactor(interests): migrate callers to interest_berths junction + drop berth_id
Phase 2b of the berth-recommender refactor (plan §3.4). Every caller
of the legacy `interests.berth_id` column now reads / writes through
the `interest_berths` junction via the helper service introduced in
Phase 2a; the column itself is dropped in a final migration.
Service-layer changes
- interests.service: filter `?berthId=X` becomes EXISTS-against-junction;
  list enrichment uses `getPrimaryBerthsForInterests`; create/update/
  linkBerth/unlinkBerth all dispatch through the junction helpers, with
  createInterest's row insert + junction write sharing a single
  transaction.
- clients / dashboard / report-generators / search: leftJoin chains
  pivot through `interest_berths` filtered by `is_primary=true`.
- eoi-context / document-templates / berth-rules-engine / portal /
  record-export / queue worker: read primary via `getPrimaryBerth(...)`.
- interest-scoring: berthLinked is now derived from any junction row
  count.
- dedup/migration-apply + public interest route: write a primary
  junction row alongside the interest insert when a berth is provided.
API contract preserved: list/detail responses still emit `berthId` and
`berthMooringNumber`, derived from the primary junction row, so
frontend consumers (interest-form, interest-detail-header) need no
changes.
Schema + migration
- Drop `interestsRelations.berth` and `idx_interests_berth`.
- Replace `berthsRelations.interests` with `interestBerths`.
- Migration 0029_puzzling_romulus drops `interests.berth_id` + the index.
- Tests that previously inserted `interests.berthId` now seed a primary
  junction row alongside the interest.
Verified: vitest 995 passing (1 unrelated pre-existing flake in
maintenance-cleanup.test.ts), tsc clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
18119644ae
feat(berths): nocodb berth import script + helpers + unit tests
Idempotent NocoDB Berths -> CRM `berths` import script with full
re-run safety. Re-running picks up NocoDB additions/edits without
clobbering CRM-side overrides (compares updated_at vs last_imported_at,
1-second tolerance for sub-second clock drift). --force overrides the
edit guard.
Mitigates the §14.1 critical/high cases:
- Mooring collisions: unique (port_id, mooring_number) on the table.
- Concurrent runs: pg_advisory_xact_lock on a stable BIGINT key.
- Numeric-with-units inputs: parseDecimalWithUnit() strips trailing
  ft/m/kw/v/usd/$ markers before parsing.
- Metric drift: NocoDB's metric formula columns are ignored; metric
  values recomputed from imperial via 0.3048 + round-to-2-decimals to
  match NocoDB's `precision: 2` columns and avoid spurious diffs.
- Map Data shape: zod-validated; failures are skipped rather than
  aborting the import.
- Status enum mapping: NocoDB display strings -> CRM snake_case.
- NocoDB row deleted: reported as "orphaned in CRM"; never auto-deleted
  (rep decides via admin UI in a future phase).
Pure helpers (parseDecimalWithUnit, mapStatus, parseMapData,
extractNumerics, mapRow, buildPlan) live in
src/lib/services/berth-import.ts so vitest can exercise the mapping
logic without triggering the script's top-level db connection.
40 new unit tests (956 -> 996 passing).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
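The unit-stripping parse and the metric recompute described above can be sketched like this. A hypothetical reconstruction, not the exported helpers themselves:

```typescript
// Hypothetical sketch of parseDecimalWithUnit: strip a trailing
// ft/m/kw/v/usd/$ marker, then parse; null on non-numeric input.
function parseDecimalWithUnit(raw: string): number | null {
  const cleaned = raw.trim().replace(/\s*(ft|m|kw|v|usd|\$)\.?$/i, "");
  const n = Number(cleaned.replace(/,/g, ""));
  return Number.isFinite(n) ? n : null;
}

// Metric recomputed from imperial: feet * 0.3048, rounded to 2 decimals
// to match NocoDB's precision-2 formula columns and avoid spurious diffs.
function feetToMeters(ft: number): number {
  return Math.round(ft * 0.3048 * 100) / 100;
}
```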
05be89ec6f
feat(berths): normalize mooring numbers to canonical form
Sweep CRM mooring numbers from the legacy hyphen+padded form ("A-01")
to the canonical bare form ("A1") used by NocoDB, the public website,
the per-berth PDFs, and the Documenso EOI templates. Drift was
introduced by the original load-berths-to-port-nimara.ts seed; this
gates the Phase 3 public-website cutover where /berths/A1 URLs would
404 against a CRM still storing "A-01".
- 0024 data migration: idempotent regexp_replace + post-update sanity
check that surfaces any non-conforming rows for manual triage.
- Invert normalizeLegacyMooring in dedup/migration-apply: it now
canonicalizes ("D-32" -> "D32") instead of legacy-izing.
- Update tiptap-to-pdfme example tokens, EOI fixture moorings, and
smoke-test seed moorings.
- Refresh seed-data/berths.json to canonical form; drop the now-
redundant legacyMooringNumber field.
- Delete scripts/load-berths-to-port-nimara.ts (superseded in 0c).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
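The canonicalization itself is a one-step transform. The migration does it in SQL via regexp_replace; a hypothetical TypeScript equivalent of the same rule (strip the hyphen, drop leading zeros):

```typescript
// Hypothetical sketch of the "A-01" -> "A1" canonicalization; the real
// migration does this with regexp_replace in SQL.
function canonicalizeMooring(legacy: string): string {
  const m = legacy.trim().toUpperCase().match(/^([A-Z]+)-?(\d+)$/);
  // Number() drops leading zeros; non-conforming rows pass through
  // untouched for manual triage (mirrors the migration's sanity check).
  return m ? `${m[1]}${Number(m[2])}` : legacy;
}
```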
d62822c284
fix(migration): NocoDB import safety + dedup helpers + lead-source backfill
migration-apply: residential client + interest inserts now wrap in
db.transaction so a partial failure can't leave an orphan client row
without its interest (or vice versa).
migration-transform: buildPlannedDocument returns null when there are
no signers so the apply pass doesn't try to send a Documenso envelope
without recipients. mapDocumentStatus gets an explicit "Awaiting
Further Details" branch that no longer auto-promotes via stale
sign-time fields. parseFlexibleDate handles ISO and DD-MM-YYYY inputs
uniformly.
backfill-legacy-lead-source: chunk UPDATE WHERE clause now
isNull(source) on top of the inArray match, so a re-run can't overwrite
a more accurate source written between batches.
Adds 235 lines of vitest coverage on migration-transform.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
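The uniform date handling mentioned above can be sketched as one function trying each accepted shape in turn. A hypothetical reconstruction of the behaviour (the real parseFlexibleDate's signature may differ):

```typescript
// Hypothetical sketch: accept ISO (YYYY-MM-DD...) and DD-MM-YYYY /
// DD/MM/YYYY uniformly; null for anything unrecognized.
function parseFlexibleDate(raw: string): Date | null {
  const iso = raw.match(/^(\d{4})-(\d{2})-(\d{2})/);
  if (iso) return new Date(Date.UTC(+iso[1], +iso[2] - 1, +iso[3]));
  const dmy = raw.match(/^(\d{1,2})[-/](\d{1,2})[-/](\d{4})$/);
  if (dmy) return new Date(Date.UTC(+dmy[3], +dmy[2] - 1, +dmy[1]));
  return null;
}
```

Note the day-first assumption on the legacy format: "03-05-2026" is 3 May, matching the DD-MM-YYYY convention in the source data.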
49d34e00c8
feat(website-intake): dual-write endpoint + migration chain repair
Adds website_submissions table + shared-secret POST endpoint so the
marketing site can dual-write inquiries alongside its NocoDB write.
Race-safe via INSERT ... ON CONFLICT, idempotent on submission_id,
refuses every request when WEBSITE_INTAKE_SECRET is unset.
Also repairs pre-existing 0020/0021/0022 prevId collision (renumbered +
journal re-sorted) so db:generate works again.
11 unit tests.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
872c75f1a1
fix(safety): plug 3 EMAIL_REDIRECT_TO leaks + 10 unit tests + live smoke
A pre-import audit caught three places where outbound comms could escape
even with EMAIL_REDIRECT_TO set. Plugged each, added unit tests so the
behavior can't silently regress, and shipped a live smoke script the
operator can run before any production data import.
Leak 1: email-compose.service.ts (per-account user composer)
Built its own nodemailer transporter and called sendMail() directly,
bypassing the centralized sendEmail()'s redirect. Now mirrors the same
redirect: when EMAIL_REDIRECT_TO is set, "to" is rewritten, "cc" is
dropped, and the subject is prefixed with "[redirected from <orig>]".
Leak 2: documenso-client.sendDocument()
Tells Documenso to actually email the document. Recipient emails were
rerouted at create-time (in pass-3) but a document created BEFORE the
redirect was turned on could still trigger a real-client email. Now
short-circuited when the redirect is set — returns the existing doc
shape so downstream code doesn't see an unexpected null.
Leak 3: documenso-client.sendReminder()
Same shape as sendDocument: emails a stored recipient address that may
predate the redirect. Now short-circuits with a warn-level log.
Tests (tests/unit/comms-safety.test.ts):
- createDocument rewrites recipients
- generateDocumentFromTemplate rewrites both v1.13 formValues.*Email
keys AND v2.x recipients[] arrays
- sendDocument is short-circuited (no /send call)
- sendReminder is short-circuited (no /remind call)
- createDocument passes through unchanged when redirect unset
- sendEmail rewrites to + subject for single recipient
- sendEmail handles array of recipients (joined into subject prefix)
- sendEmail passes through unchanged when redirect unset
- Webhook worker reads process.env.EMAIL_REDIRECT_TO at dispatch time
(no module-level caching that could miss a runtime flip)
Live smoke (scripts/smoke-test-redirect.ts):
Monkey-patches nodemailer.createTransport, calls the real sendEmail()
with a fake real-client address, verifies the captured outbound has
the right "to" + subject. Run: `pnpm tsx scripts/smoke-test-redirect.ts`.
Exits non-zero if the redirect failed for any reason — drop-in for a
pre-deploy check.
Verification:
pnpm exec tsc --noEmit — 0 errors
pnpm exec vitest run — 936/936 (was 926, +10 new safety tests)
pnpm tsx scripts/smoke-test-redirect.ts — PASS
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
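The redirect rewrite that all three fixes converge on can be sketched as a pure function. A hypothetical reconstruction of the behaviour described (to rewritten, cc dropped, subject prefixed; arrays joined into the prefix; pass-through when unset):

```typescript
// Hypothetical sketch of the EMAIL_REDIRECT_TO rewrite; field names are
// assumptions, not the actual sendEmail() signature.
interface Mail {
  to: string | string[];
  cc?: string[];
  subject: string;
}

function applyRedirect(mail: Mail, redirectTo: string | undefined): Mail {
  if (!redirectTo) return mail; // pass through unchanged when unset
  const orig = Array.isArray(mail.to) ? mail.to.join(", ") : mail.to;
  return {
    to: redirectTo,
    // cc is intentionally dropped so no real recipient is copied
    subject: `[redirected from ${orig}] ${mail.subject}`,
  };
}
```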
cb83b09b2d
Merge feat/dedup-migration: client dedup library + NocoDB migration script + admin queue
# Conflicts:
#	.gitignore
#	src/lib/db/migrations/meta/_journal.json
49d92234dd
fix(test): align stage names with consolidated pipeline enum
Followup to
4bcc7f8be6
feat(dedup): runtime surfaces — merge service, at-create suggestion, admin queue (P2)
Adds the live dedup pipeline on top of the P1 library + P3 migration
script. The new `client/interest` model now actively prevents duplicate
client records at creation time and gives admins a queue to triage
the borderline pairs the at-create check missed.
Three layers, per design §7:
Layer 1 — At-create suggestion
==============================
`GET /api/v1/clients/match-candidates`
Accepts free-text email / phone / name from the in-flight client
form, normalizes them via the dedup library, and returns scored
matches against the port's live client pool. Filters out
low-confidence noise (the background scoring queue picks those up
separately). Strict port scoping; never leaks across tenants.
`<DedupSuggestionPanel>` (`src/components/clients/dedup-suggestion-panel.tsx`)
Debounced React Query hook. Renders nothing for short inputs or
no useful match. On a high-confidence match it interrupts visually
with an amber-tinted card and a "Use this client" primary button.
Medium confidence falls back to a softer "possible match — check
before creating" treatment.
`<ClientForm>`
Renders the panel above the form (create path only — skipped on
edit). New `onUseExistingClient` callback fires when the user
picks the existing client; the form closes and the parent decides
what to do (typically: navigate to that client's detail page or
open the create-interest dialog pre-filled).
Layer 2 — Merge service
=======================
`mergeClients` (`src/lib/services/client-merge.service.ts`)
The atomic merge primitive that everything else calls. Single
transaction. Per §6 of the design:
- Locks both rows (FOR UPDATE) so concurrent merges of the same
loser fail with a clear error rather than racing.
- Snapshots the full loser state (contacts / addresses / notes /
tags / interest+reservation IDs / relationship rows) into the
`client_merge_log.merge_details` JSONB column for the eventual
undo flow.
- Reattaches every loser-side row to the winner: interests,
reservations, contacts (skipping duplicates by `(channel, value)`),
addresses, notes, tags (deduped), relationships.
- Optional `fieldChoices` — per-scalar overrides letting the user
keep the loser's value for fullName / nationality / preferences /
timezone / source.
- Marks the loser archived with `mergedIntoClientId` set (a redirect
pointer for stragglers; never hard-deleted within the undo window).
- Resolves any matching `client_merge_candidates` row to status='merged'.
- Writes audit log entry.
Schema additions:
- `clients.merged_into_client_id` (nullable text, indexed) — the
redirect pointer set on archive.
Tests: 6 cases against a real DB — happy path moves rows + writes log;
self-merge / cross-port / already-merged refused; duplicate-contact
deduped on reattach; fieldChoices copies loser values to winner.
Layer 3 — Admin review queue
============================
`GET /api/v1/admin/duplicates`
Pending merge candidates (status='pending') for the current port,
with both client summaries hydrated for side-by-side rendering.
Skips pairs where one side is already archived/merged.
`POST /api/v1/admin/duplicates/[id]/merge`
Confirms a candidate. Body picks the winner; the other side
becomes the loser. Calls into `mergeClients` — the only path that
writes `client_merge_log`.
`POST /api/v1/admin/duplicates/[id]/dismiss`
Marks the candidate dismissed. Future scoring runs skip the same
pair until a score change recreates the row.
`<DuplicatesReviewQueue>` (`/admin/duplicates`)
Side-by-side card UI for each pending pair. Click a card to pick
the winner; the other side is automatically the loser. Toolbar:
"Merge into selected" + "Dismiss". No per-field merge editor in
this PR — that's a future polish; the simple "pick the better row"
flow handles ~80% of cases.
Test coverage
=============
11 new integration tests (76 added in this branch total):
- 6 mergeClients (atomicity, refusal cases, contact dedup,
fieldChoices)
- 5 match-candidates API (shape, port scoping, confidence tiers,
Pattern F false-positive guard)
Full vitest: 926/926 passing (was 858 before the dedup branch).
Lint: clean. tsc: clean for new files (only pre-existing errors in
unrelated `tests/integration/` files remain, same as before this PR).
Out of scope, deferred
======================
- Background scoring cron that populates `client_merge_candidates`
(the queue is empty until this lands; manual seeding works for
now via the at-create flow).
- Side-by-side per-field merge editor with checkboxes (the simple
"pick the winner" UX shipped here covers ~80% of real cases).
- Admin settings UI for tuning the dedup thresholds. Defaults from
the design (90 / 50) are baked in for now.
- `unmergeClients` (the snapshot is captured in client_merge_log;
the undo endpoint just hasn't been wired yet).
These are all natural follow-up PRs that don't block shipping the
runtime UX.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
18e5c124b0
feat(dedup): NocoDB migration script + tables (P3 dry-run)
Lands the one-shot migration pipeline from the legacy NocoDB Interests
base into the new client/interest schema. Dry-run mode is fully
operational: pulls the live snapshot, runs the dedup library, and
writes a CSV + Markdown report under .migration/<timestamp>/. The
--apply phase is stubbed for a follow-up PR per the design's P3
implementation sequence.
Schema additions
================
- `client_merge_candidates` — pairs flagged by the background scoring
  job for the /admin/duplicates review queue. Status enum: pending /
  dismissed / merged. Unique-(portId, clientAId, clientBId) so the
  same pair can't surface twice. Empty until P2 lands the cron.
- `migration_source_links` — idempotency ledger. Maps source-system
  rows (NocoDB Interest #624 → new client UUID) so re-running --apply
  against the same dry-run report skips already-imported entities.
Both tables ship with the migration `0020_unusual_azazel.sql` — already
applied to the local dev DB during this commit's preparation.
Library
=======
src/lib/dedup/nocodb-source.ts
  Read-only adapter for the legacy NocoDB v2 API. xc-token auth,
  auto-paginates until isLastPage, captures the table IDs from the
  2026-05-03 audit. `fetchSnapshot()` pulls every relevant table in
  parallel into one in-memory object the transform layer consumes.
src/lib/dedup/migration-transform.ts
  Pure function: NocoDB snapshot in, MigrationPlan out. Per row:
  - normalizes name / email / phone / country via the dedup library
  - parses the legacy DD-MM-YYYY / DD/MM/YYYY / ISO date formats
  - maps the 8-stage `Sales Process Level` enum to the new 9-stage
    pipelineStage
  - filters yacht-name placeholders ('TBC', 'Na', etc.)
  - merges Internal Notes + Extra Comments + Berth Size Desired into
    a single notes blob
  Then runs `findClientMatches` pairwise (with blocking) and
  union-finds clusters of rows whose score crosses the auto-link
  threshold (90). Lower-scoring pairs (50–89) become 'needs review'.
  Each cluster's "lead" row is picked by completeness score with
  recency tie-break.
src/lib/dedup/migration-report.ts
  Writes three artifacts to .migration/<timestamp>/:
  - report.csv — one row per planned op, RFC-4180 escaped
  - summary.md — human-skimmable overview
  - plan.json — full structured plan for the --apply phase
  CSV cells with comma / quote / newline are quoted; internal quotes
  are doubled. No external CSV dep.
src/lib/dedup/phone-parse.ts
  Script-safe wrapper around libphonenumber-js's `core` entry that
  loads `metadata.min.json` directly. The default `index.cjs.js`
  bundled by libphonenumber hits a metadata-shape interop bug under
  Node 25 + tsx (`{ default }` wrapping); core+JSON sidesteps it. The
  dedup `normalizePhone` and `find-matches` both use this wrapper now
  so the same code path runs in vitest, Next.js, and the migration CLI
  without surprises.
src/lib/dedup/normalize.ts
  Tightened country resolution: added Caribbean short-form aliases
  ('antigua' → AG, 'st kitts' → KN, etc.) and a city map covering the
  US locations seen in the NocoDB dump (Boston, Tampa, Fort
  Lauderdale, Port Jefferson, Nantucket). Also relaxed phone parsing
  to drop the `isValid()` strict check — the libphonenumber min build
  rejects many real NANP-territory numbers, and dedup only needs a
  canonical E.164 to compare.
CLI
===
scripts/migrate-from-nocodb.ts
  pnpm tsx scripts/migrate-from-nocodb.ts --dry-run
    → Pulls the live NocoDB base (NOCODB_URL + NOCODB_TOKEN env vars),
      runs the transform, writes report. No DB writes.
  pnpm tsx scripts/migrate-from-nocodb.ts --apply --report .migration/<dir>/
    → Stubbed; exits with `not yet implemented` and a pointer to the
      design doc. Apply phase ships in a follow-up.
Tests
=====
tests/unit/dedup/migration-transform.test.ts (7 cases)
  Fixture-based regression. A frozen 12-row NocoDB snapshot covers
  every duplicate pattern in the design (§1.2). The test asserts:
  - 12 input rows → 7 unique clients (cluster math is right)
  - Patterns A / B / C / E auto-link
  - Pattern F (Etiennette Clamouze) does NOT auto-link
  - Every interest preserved as its own row even when clients merge
  - 8-stage → 9-stage enum mapping is correct per spec
  - Multi-yacht merge (Constanzo CALYPSO + Costanzo GEMINI under one
    client) — the design's signature win
  - Output is deterministic (run twice, identical)
Validation against real data
============================
Ran `pnpm tsx scripts/migrate-from-nocodb.ts --dry-run` against the
live NocoDB. Result on 252 Interests rows:
- 237 clients (15 merged into 13 clusters)
- 252 interests (one per source row)
- 406 contacts, 52 addresses
- 13 auto-linked clusters (every confirmed cluster from §1.2 audit)
- 3 pairs flagged for review (Camazou, Zasso, one new)
- 1 phone placeholder flagged
Total dedup test count: 57 (50 from P1 + 7 fixture tests). Lint:
clean. Tsc: clean for new files.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
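The RFC-4180 escaping rule used by the report writer fits in a few lines. A hypothetical sketch of that rule (the repo's helper name is assumed):

```typescript
// Hypothetical sketch of the CSV escaping described above: cells with
// comma / quote / newline are quoted, internal quotes doubled.
function escapeCsvCell(cell: string): string {
  return /[",\n]/.test(cell) ? `"${cell.replace(/"/g, '""')}"` : cell;
}
```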
8b077e1999
feat(dedup): normalization + match-finding library (P1)
The pure-logic spine of the client deduplication system spec'd in
docs/superpowers/specs/2026-05-03-dedup-and-migration-design.md.
Two modules, JSX-free, vitest-tested against fixtures drawn directly
from real dirty values observed in the legacy NocoDB Interests audit.
src/lib/dedup/normalize.ts
- normalizeName: trims whitespace, replaces \r/\n/\t, intelligently
title-cases ALL-CAPS surnames while keeping particles (van / de /
dalla / etc.) lowercase mid-name. Preserves Irish O' surnames and
the "slash-with-company" structure ("Daniel Wainstein / 7 Knots,
LLC") seen in production. Returns a surnameToken (lowercased last
non-particle token) for use as a dedup blocking key.
- normalizeEmail: trim + lowercase + zod email validation. Plus-aliases
preserved; null on invalid.
- normalizePhone: pre-cleans the input (strips spreadsheet apostrophes,
carriage returns, dots/dashes/parens, converts 00 prefix to +) then
delegates to libphonenumber-js. Detects multi-number fields ("a/b",
"a;b") and placeholder fakes (8+ consecutive zeros, e.g.
+447000000000). Flags every quirk so the migration report and runtime
audit log can surface it.
- resolveCountry: maps free-text country/region input to ISO-3166-1
alpha-2 via alias → exact (against Intl-derived names) → city → fuzzy
(Levenshtein ≤ 2). Fuzzy is gated by length so 4-char inputs ("Mars")
don't false-positive against short country names.
- levenshtein: standard iterative implementation, exported for reuse
by find-matches.
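The exported `levenshtein` is described as the standard iterative form; a two-row version of that classic algorithm looks like this (the repo's export may differ in minor detail):

```typescript
// Standard iterative two-row Levenshtein edit distance.
function levenshtein(a: string, b: string): number {
  let prev = Array.from({ length: b.length + 1 }, (_, i) => i);
  for (let i = 1; i <= a.length; i++) {
    const curr: number[] = [i];
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1; // substitution cost
      curr[j] = Math.min(
        curr[j - 1] + 1, // insertion
        prev[j] + 1,     // deletion
        prev[j - 1] + cost,
      );
    }
    prev = curr;
  }
  return prev[b.length];
}
```

The Constanzo/Costanzo surname pair from the design's Pattern E is distance 1, which is exactly the ≤ 1 fuzzy threshold used by find-matches below.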
src/lib/dedup/find-matches.ts
- findClientMatches: builds three blocking indexes off the pool (email
/ phone / surname-token), gathers the comparison set via union, and
scores each candidate via the rule set in design §4.2:
Email match +60
Phone E.164 match +50 (≥ 8 digits, excludes placeholder zeros)
Name exact match +20
Surname + given fuzzy +15 (Levenshtein ≤ 1)
Negative: shared email but different phone country −15
Negative: name match but no shared contact −20
Score is clamped to [0,100]. Confidence tier ('high' / 'medium' /
'low') is derived from configurable thresholds passed in by the
caller — defaults are highScore=90, mediumScore=50.
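The rule table above reduces to additive evidence with a clamp. A hypothetical reconstruction (evidence flag names are assumptions; the real scorer derives them from the normalized pool):

```typescript
// Hypothetical sketch of the §4.2 scoring table and threshold tiers.
interface Evidence {
  emailMatch: boolean;               // +60
  phoneMatch: boolean;               // +50
  nameExact: boolean;                // +20
  nameFuzzy: boolean;                // +15 (surname + given, Levenshtein <= 1)
  emailButDiffPhoneCountry: boolean; // -15
  nameOnlyNoSharedContact: boolean;  // -20
}

function scoreCandidate(e: Evidence): number {
  let s = 0;
  if (e.emailMatch) s += 60;
  if (e.phoneMatch) s += 50;
  if (e.nameExact) s += 20;
  if (e.nameFuzzy) s += 15;
  if (e.emailButDiffPhoneCountry) s -= 15;
  if (e.nameOnlyNoSharedContact) s -= 20;
  return Math.max(0, Math.min(100, s)); // clamp to [0, 100]
}

function tier(score: number, high = 90, medium = 50): "high" | "medium" | "low" {
  return score >= high ? "high" : score >= medium ? "medium" : "low";
}
```

Email + phone alone already over-saturates (60 + 50 = 110), so the clamp is what keeps the score on a 0-100 scale.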
tests/unit/dedup/normalize.test.ts (38 cases)
Every dirty-data pattern from design §1.3 has a fixture: carriage
returns in names, ALL-CAPS surnames, lowercase entries, particles,
slash-with-company, plus-aliases, capitalized email localparts,
spreadsheet-apostrophe phones, multi-number phones, placeholder
phones, 00-prefix phones, French/UK local-format phones,
Saint-Barthélemy diacritic variants, Kansas City fallback.
tests/unit/dedup/find-matches.test.ts (12 cases)
Each duplicate cluster from design §1.2 has a test:
- Pattern A (Deepak Ramchandani — pure double-submit) → high
- Pattern B (Howard Wiarda — phone format variance) → high
- Pattern C (Nicolas Ruiz — name capitalization) → high
- Pattern D (Chris/Christopher Allen — name shortening) → high
- Pattern E (Christopher Camazou — typo on resubmit) → high or medium
- Pattern E (Constanzo/Costanzo — surname typo, multi-yacht) → high
- Pattern F (Etiennette Clamouze — same name, different country) →
must NOT auto-merge
- Pattern F (Bruno+Bruce — shared household contact) → no match
- Negative evidence (same email, different phone country) → medium
- Blocking (no shared keys → 0 matches)
- Sort order (high before low)
- Empty pool
Total: 50 new tests, all green. Zero changes to runtime behavior or
schema; unblocks P2 (runtime surfaces) and P3 (NocoDB migration).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
e2398099c4
test(audit-fixes): cover the new permission and webhook surfaces
Adds integration coverage for the routes / handlers shipped in the preceding audit-fix commits, plus refactors two route files to expose inner handlers from a sibling `handlers.ts` (the pattern used elsewhere in `src/app/api/v1`) so tests can call them without the `withAuth(withPermission(…))` wrapper. New tests (18 cases across 4 files): - `tests/integration/portal-auth.test.ts` (6) — verifyPortalToken rejects tokens missing `aud: 'portal'` or `iss: 'pn-crm'`, with the wrong audience (CRM-session-replay shape) or wrong issuer, plus a round-trip happy path. Locks in the portal-vs-CRM token isolation. - `tests/integration/api/saved-views-ownership.test.ts` (6) — patch and delete handlers return 403 for a different user, 404 for an unknown id or cross-port id, and 200 for the owner. Ownership is enforced at the route layer regardless of the service's internal filtering. - `tests/integration/api/berth-reservations-list.test.ts` (3) — the new global list returns rows for the current port only and honors pagination params. A reservation in a different port never leaks. - `tests/integration/documents-expired-webhook.test.ts` (3) — handleDocumentExpired flips the document to `expired`, also flips the linked interest's `eoiStatus`, writes a `documentEvents` row, and is a no-op (not a throw) when the documensoId is unknown. Refactors: - `src/app/api/v1/saved-views/[id]/route.ts` extracts `patchHandler` / `deleteHandler` (and the shared `assertViewOwner`) into `handlers.ts`. The route file is now a 4-line `withAuth(handler)` wrapper. - `src/app/api/v1/berth-reservations/route.ts` extracts `listHandler` similarly. Tests import directly from `handlers.ts`. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> |
d364b09885
fix(realtime): keep socket through reconnects, stop re-subscribe storm
Two correctness bugs in the real-time stack — both silent failures, both
session-wide once they trigger.
(1) `SocketProvider` was setting the React context to null on every
`disconnect` event. socket.io's built-in reconnection re-establishes the
underlying transport and replays handlers, but the React tree had
already lost its reference to the socket — so every `useSocket()`
consumer saw null until a session/port change forced a remount. Effect:
after the first transient drop (laptop sleep, wifi blip, server
restart), realtime invalidation and toasts went dead session-wide with
no user-visible signal.
Fix: keep the socket reference stable for the lifetime of the
session+port, and surface a separate `isConnected` boolean for any UI
that wants to render an offline indicator. Exposed as a new
`useIsSocketConnected()` hook; `useSocket()` signature is unchanged.
(2) `useRealtimeInvalidation` captured `eventMap` as a useEffect
dependency. Every caller passes a fresh `{ ... }` object literal on each
render, so the effect re-ran every render → `socket.off`/`socket.on`
storm on pages with many subscribed events.
Fix: extract the subscription logic into a pure helper
(`realtime-invalidation-core.ts`, JSX-free for vitest). The hook now
keeps the latest map in a ref and only re-subscribes when the SET of
event names changes (joined-keys signature, not object identity). The
handler reads `ref.current` at fire time, so callers still see fresh
queryKey lists without re-binding.
Helper is unit-tested with a stub socket: registration count,
fire-time map lookup, cleanup deregistration, missing-event safety.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
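The "set of event names" signature in fix (2) is the key trick: two fresh object literals with the same keys must produce the same subscription signature. A hypothetical sketch of that comparison (the helper in realtime-invalidation-core.ts may compute it differently):

```typescript
// Hypothetical sketch: re-subscribe only when this joined-keys string
// changes, not on every new eventMap object literal.
function eventSetSignature(eventMap: Record<string, unknown>): string {
  return Object.keys(eventMap).sort().join("|");
}
```

Used as the effect dependency, this makes `{ a: [...], b: [...] }` created on every render equivalent to the previous render's map, so `socket.off`/`socket.on` only fires when an event name is actually added or removed.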
d197f8b321
feat(eoi): align prerequisites with EOI document structure
Match the gate to the actual EOI's structure (Section 2 vs Section 3) so
the rep can generate the document the moment they have what they need —
and not before.
Required (Section 2 — top paragraph):
- Client name
- Client primary email
- Client primary address
Optional (Section 3 — left blank when absent):
- Linked yacht (name, dimensions)
- Linked berth (mooring number)
Previously the dialog blocked generation unless yacht AND berth were both
linked, which was overzealous — early-stage EOIs are routinely sent before
a specific berth is pinned down.
- eoi-context.ts: yacht and berth are now nullable in the returned
context. The hard ValidationError is now driven by the EOI's Section
2 fields (name/email/address) rather than yacht/berth presence. The
owner block falls back to the interest's client when no yacht is
linked, so signing parties remain resolvable.
- documenso-payload.ts + fill-eoi-form.ts: Section 3 form values
render as empty strings when yacht or berth are absent, so the
rendered PDF leaves those template inputs blank.
- document-templates.ts: yacht.* and berth.* tokens fall back to
empty strings; the legacy-fallback catch handler also recognises
the new "missing required client details" error.
- interests.service.ts: getInterestById now also returns
`clientPrimaryEmail` and `clientHasAddress` so the Documents tab
can compute the EOI prerequisites checklist client-side without an
extra fetch.
- eoi-generate-dialog.tsx: prereqs split into two groups visually —
Required (with red ✗ when missing) and Optional (with grey – when
absent). The Generate button only requires the Required block to
pass. A small amber banner surfaces when Required is incomplete so
the rep knows where to add the missing data.
Tests: 835/835 pass. Replaces the obsolete "throws on missing yacht/
berth" tests with parity coverage for the new behaviour ("builds a
valid context when yacht/berth missing", "throws when client email/
address missing"). Adds a payload test for the empty-Section-3 case.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
886119cbde
refactor(sales): consolidate pipeline stages + wire EOI auto-advance
The 8→9 stage refresh from earlier today only updated constants.ts and the DB —
20 component/service files still hardcoded the old enum, leaving labels blank,
filter dropdowns wrong, kanban columns mismatched, and the analytics funnel
silently dropping new-stage rows. The platform also never advanced
pipelineStage on EOI lifecycle events: documents.service.ts wrote eoiStatus
but left the user-visible stage stuck.
This commit closes both gaps:
1. Single source of truth in src/lib/constants.ts — adds STAGE_LABELS,
STAGE_BADGE, STAGE_DOT, STAGE_WEIGHTS, STAGE_TRANSITIONS plus
stageLabel / stageBadgeClass / stageDotClass / safeStage /
canTransitionStage helpers. components/clients/pipeline-constants.ts
becomes a re-export shim so existing imports keep working.
2. 18 stale-enum surfaces migrated — interest list (table, card, filters,
form, stage picker), pipeline board, client card, berth interests tab,
portal client interests page, dashboard pipeline / funnel / revenue-
forecast charts, settings pipeline_weights default, dashboard.service
weights, analytics.service funnel stages, alert-rules stale-interest
filter, interest-scoring stage rank.
3. Documents tab wired into interest detail — replaced the placeholder in
interest-tabs.tsx with InterestDocumentsTab + InterestFilesTab so the
EOI launcher is back where salespeople work.
4. Auto-advance — new advanceStageIfBehind() in interests.service.ts
(forward-only, no-op if interest is already past the target). Called
from documents.service.ts on send (→ eoi_sent), Documenso completed
webhook (→ eoi_signed), and manual signed-EOI upload (→ eoi_signed).
5. Transition guard — canTransitionStage() blocks egregious skips
(e.g. completed → open, open → contract_signed). Enforced in
changeInterestStage before the DB write.
Tests updated to reflect the 9-stage model. tsc clean, vitest 832/832,
ESLint clean on every file touched.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
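The forward-only semantics of advanceStageIfBehind reduce to an index comparison over the stage order. An illustrative sketch using only the stage names that appear in these commit messages (the real 9-stage order and transition table live in src/lib/constants.ts):

```typescript
// Illustrative partial stage order; NOT the repo's full 9-stage enum.
const STAGE_ORDER = [
  "open", "eoi_sent", "eoi_signed", "deposit_10pct",
  "contract_signed", "completed",
] as const;
type Stage = (typeof STAGE_ORDER)[number];

function advanceStageIfBehind(current: Stage, target: Stage): Stage {
  // Forward-only: a no-op when the interest is already at or past target.
  return STAGE_ORDER.indexOf(current) >= STAGE_ORDER.indexOf(target)
    ? current
    : target;
}
```

This is why a late Documenso `completed` webhook can never drag a contract_signed interest back to eoi_signed.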
5d44f3cfa4
fix(test): raise mobile-audit timeout to 30min for 4-viewport runs
Task 24 audit run hit the 10-minute test.setTimeout ceiling after
capturing 2 of 4 viewport passes (iphone-se complete, iphone-16
complete-ish, 16-pro partial, pro-max not started). 4 viewports ×
~45 routes × slowMo: 200 needs more headroom than 600s gave. 30min is
comfortable headroom; the per-test project timeout is matched.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
d0540dca55
fix(build): extract route.ts handlers to handlers.ts (CLAUDE.md convention)
8 API route files were exporting handler functions directly from
route.ts, which Next.js 15 rejects with "$NAME is not a valid Route
export field". Per CLAUDE.md convention, service-tested handler
functions live in sibling handlers.ts files and route.ts only
re-exports the GET/POST/etc. wrapped in withAuth(withPermission(...)).
Discovered during the mobile-foundation Task 24 build validation; the
route files predate this branch but the build was never re-run on
data-model.
Files:
- berth-reservations/[id], companies/autocomplete,
  companies/[id]/members + nested mid/set-primary, yachts/autocomplete,
  yachts/[id]/transfer, yachts/[id]/ownership-history
- Integration tests updated to import from handlers.ts (companies,
  memberships, reservations, yachts-detail)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
0e9c24e222
test(visual): add mobile shell snapshot baselines (dashboard + more sheet)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
3aba2181dc
feat(test): extract anchor iPhone device descriptors to shared fixture

Move the four iPhone viewport descriptors (SE, 15/16, 16/17 Pro, Pro Max)
into tests/e2e/fixtures/devices.ts so the upcoming visual spec (Task 23)
can share the same anchors. The mobile-audit spec now spreads each
descriptor and adds a slug `name` plus a human `label` for the run header.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
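The shared fixture's shape might look like this sketch. The four widths are the anchor viewports named in the scaffold commit (375/393/402/440); the heights and labels here are illustrative assumptions:

```typescript
// tests/e2e/fixtures/devices.ts (sketch) -- slug `name` + human `label`
// per descriptor, as described above. Heights are assumed values.
const iphoneViewports = [
  { name: "iphone-se",      label: "iPhone SE",        viewport: { width: 375, height: 667 } },
  { name: "iphone-16",      label: "iPhone 15/16",     viewport: { width: 393, height: 852 } },
  { name: "iphone-16-pro",  label: "iPhone 16/17 Pro", viewport: { width: 402, height: 874 } },
  { name: "iphone-pro-max", label: "iPhone Pro Max",   viewport: { width: 440, height: 956 } },
] as const;
```

Both the audit spec and the visual spec can then spread each descriptor into their Playwright project configs.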
9f786fbcf3
feat(mobile): set data-form-factor body attr from User-Agent in root layout
fbb1f1f366
scaffold(mobile): branch setup — audit harness, spec, plan, gitignore + client-portal cleanup

Pre-execution baseline for the mobile foundation PR:
- Mobile audit harness (tests/e2e/audit/mobile.spec.ts + mobile-audit
  Playwright project) — visits every page at four anchor iPhone viewports
  (375/393/402/440), screenshots full-page to .audit/mobile/, generates
  index.md
- Design spec (docs/superpowers/specs/2026-04-29-mobile-optimization-design.md)
  — adaptive shell + responsive content; full active-iPhone-range coverage;
  foundation + per-page migration phases
- Implementation plan (docs/superpowers/plans/2026-04-29-mobile-foundation.md)
  — 24 TDD tasks for the foundation PR
- .gitignore: ignore /client-portal/ (legacy nested Nuxt repo) and /.audit/
  (regenerable screenshots)
- Remove phantom client-portal gitlink (mode 160000 with no .gitmodules)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
ba89b61b3f
fix(security): port-scope clientId/berthId/yachtId on interests + clientRelationships

Pass-6 findings — both MEDIUM cross-tenant FK injection.
- interests.service: createInterest/updateInterest/linkBerth accepted
  clientId/berthId/yachtId from the request body without verifying the
  referenced row belongs to the caller's port. getInterestById joins
  clients/berths/yachtTags on these FKs without a port filter, so a port-A
  caller could splice in a foreign-port id and surface that tenant's
  clientName, mooringNumber, or yacht ownership on read. A new
  assertInterestFksInPort helper guards all three surfaces.
- clients.service.createRelationship: accepted clientBId from the body
  without a port check; the relationship list endpoint joins clients
  without filtering by port, so the foreign client's name + email would
  render in the relationships tab. Now verifies clientBId belongs to portId
  and rejects self-relationships.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
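The guard pattern reduces to one check per body-supplied FK, sketched here with an in-memory table standing in for the Drizzle query (names simplified; assertInterestFksInPort applies the same check to all three FKs):

```typescript
type Row = { id: string; portId: string };

// Stand-in for the clients table; assume two tenants.
const clients: Row[] = [
  { id: "c1", portId: "port-A" },
  { id: "c2", portId: "port-B" },
];

// Reject any body-supplied FK whose row is missing or belongs to a
// different port -- the write never happens with a foreign id.
function assertFkInPort(table: Row[], id: string, portId: string): void {
  const row = table.find((r) => r.id === id);
  if (!row || row.portId !== portId) {
    throw new Error("referenced row does not belong to this port");
  }
}
```

Running the check at write time means the unfiltered read-side joins never see a foreign-port id in the first place.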
4eea19a85b
sec: lock down 5 cross-tenant FK gaps from fifth-pass review

1. HIGH — reminders.create/updateReminder accepted clientId/interestId/
berthId from the body and persisted them with no port check; getReminder
then hydrated the row via Drizzle relations (no port filter on the
join), so a port-A user with reminders:create could exfiltrate any
port-B client/interest/berth row by guessing its UUID. New
assertReminderFksInPort gates create + update.
2. HIGH — listRecommendations(interestId, _portId) discarded portId
entirely; the route GET /api/v1/interests/[id]/recommendations
forwarded the URL id straight through. A port-A user with
interests:view could read any other tenant's recommended berths
(mooring numbers, dimensions, status). Service now verifies the
interest belongs to portId and joins berths filtered by port.
3. HIGH — Berth waiting list. The PATCH route did not pre-check that
the berth belonged to ctx.portId — a port-A user with
manage_waiting_list could reorder a port-B berth's queue. Separately,
updateWaitingList accepted arbitrary entries[].clientId and inserted
them without verifying tenancy, polluting the table with foreign-port
FKs. Both gaps closed.
4. MEDIUM — setEntityTags (clients/companies/yachts/interests/berths)
accepted any tagId and inserted into the join table. The tags table
is per-port but the join only carries a single-column FK. The
downstream getById join `tags ON join.tag_id = tags.id` has no port
filter, so a foreign tag's name + color render in the requesting port.
Helper now batch-validates tagIds belong to portId before insert.
5. MEDIUM — /api/v1/custom-fields/[entityId] PUT had no withPermission
gate (any role, including viewer, could write) and didn't validate
that the URL entityId pointed at a port-scoped entity of the field
definition's entityType. Route now uses
withPermission('clients','view'/'edit',…); service validates the
entityId per resolved entityType (client/interest/berth/yacht/company)
against portId.
Test mocks updated to cover the new entity-port-scope check.
818 vitest tests pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
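Item 4's batch validation might look like this sketch, with an in-memory tags table standing in for the per-port table; the helper name mirrors the description above but the signature is an assumption:

```typescript
type Tag = { id: string; portId: string };

const tags: Tag[] = [
  { id: "t1", portId: "port-A" },
  { id: "t2", portId: "port-A" },
  { id: "t9", portId: "port-B" },
];

// Batch-validate every tagId against the caller's port before inserting
// into the single-column-FK join table.
function assertTagsInPort(tagIds: string[], portId: string): void {
  const owned = new Set(tags.filter((t) => t.portId === portId).map((t) => t.id));
  const foreign = tagIds.filter((id) => !owned.has(id));
  if (foreign.length > 0) {
    throw new Error(`tags not in port: ${foreign.join(", ")}`);
  }
}
```

One set-membership pass covers the whole batch, so the insert stays a single transaction.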
47a1a51832
sec: webhook SSRF guard, IMAP-sync owner check, watcher port membership

Three findings from a fourth-pass review:
1. MEDIUM — webhook URL SSRF. The validator only enforced HTTPS + URL
   parse; it accepted private/loopback/link-local/.internal hosts. The
   delivery worker fetched arbitrary URLs and persisted up to 1KB of the
   response body into webhook_deliveries.response_body, which is then
   surfaced via the deliveries listing endpoint — a port admin could
   register a webhook pointing at an internal HTTPS endpoint, hit the test
   endpoint to force immediate dispatch, and read the response back. The
   validator now rejects RFC-1918/loopback/link-local/CGNAT/ULA IPs (v4 +
   v6) and .internal/.local/.localhost/.lan/.intranet/.corp suffixes; the
   worker re-resolves the hostname at dispatch time and blocks before fetch
   (DNS-rebinding defense). A 21-case unit test covers the matrix.
2. MEDIUM — POST /api/v1/email/accounts/[id]/sync had no owner check. Any
   user with email:view could enqueue an inbox-sync job for any accountId,
   which the worker would honour using the foreign user's decrypted IMAP
   credentials, advancing the account's lastSyncAt (a data-loss risk on the
   legitimate owner's next sync). The route now asserts
   account.userId === ctx.userId before enqueueing, matching the
   toggle/disconnect endpoints.
3. MEDIUM — addDocumentWatcher (and the wizard / upload watcher inserts)
   didn't validate that the watcher's userId belonged to the document's
   port. notifyDocumentEvent then emitted a real-time socket toast + email
   containing the document title to the foreign user. A new
   assertWatchersInPort helper verifies each candidate has a userPortRoles
   row for the port (super-admin bypass).

818 vitest tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
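A reduced sketch of finding 1's validator, covering only IPv4 literals and hostname suffixes; the real guard also handles IPv6, CGNAT, and ULA ranges and re-resolves DNS at dispatch time:

```typescript
const BLOCKED_SUFFIXES = [".internal", ".local", ".localhost", ".lan", ".intranet", ".corp"];

// True for IPv4 literals in the private/loopback/link-local ranges.
function isPrivateIpv4(host: string): boolean {
  const m = host.match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/);
  if (!m) return false;
  const a = Number(m[1]);
  const b = Number(m[2]);
  return (
    a === 10 ||                          // 10.0.0.0/8
    a === 127 ||                         // loopback
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) ||          // 192.168.0.0/16
    (a === 169 && b === 254)             // link-local
  );
}

function isWebhookUrlAllowed(raw: string): boolean {
  let url: URL;
  try { url = new URL(raw); } catch { return false; }
  if (url.protocol !== "https:") return false;
  const host = url.hostname.toLowerCase();
  if (host === "localhost") return false;
  if (BLOCKED_SUFFIXES.some((s) => host.endsWith(s))) return false;
  return !isPrivateIpv4(host);
}
```

Validation at registration time is necessary but not sufficient; the re-resolve-at-dispatch step is what actually defeats DNS rebinding.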
e06fb9545b
sec: lock down 5 cross-tenant IDORs uncovered in second-pass review

1. HIGH — /api/v1/admin/ports/[id] PATCH + GET let any port-admin
   (manage_settings) mutate any other tenant's port row by passing the
   foreign id in the path. Non-super-admins must now target their own
   ctx.portId; listPorts and createPort are super-admin only.
2. HIGH — Invoice create/update accepted arbitrary expenseIds and linked
   them into invoice_expenses with no port check; the GET response then
   re-emitted those foreign expense rows via the linkedExpenses join.
   assertExpensesInPort now validates that each id belongs to the caller's
   portId before insert; getInvoiceById's join filters by expenses.portId
   as defense-in-depth.
3. HIGH — Document creation paths (createDocument, createFromWizard,
   createFromUpload) persisted user-supplied clientId/interestId/companyId/
   yachtId/reservationId without verifying those FKs were in-port.
   sendForSigning then loaded the foreign client/interest by id alone and
   pushed their PII into the Documenso payload. A new assertSubjectFksInPort
   helper rejects out-of-port FKs at create time; sendForSigning's
   interest + client lookups now also filter by portId.
4. MEDIUM — calculateInterestScore read its redis cache before verifying
   portId, and the cache key was interestId-only — a foreign-port caller
   could observe a cached score breakdown. The cache key now includes
   portId, and the port-scope DB lookup runs before any cache.get.
5. MEDIUM — AI email-draft job results were retrievable by anyone who could
   guess the BullMQ jobId (sequential integers by default). Job ids are now
   random UUIDs, requestEmailDraft validates that interestId/clientId belong
   to ctx.portId before enqueueing, the worker's client lookup is
   port-scoped, and getEmailDraftResult requires the caller to match the
   original requester's userId + portId before returning the drafted
   subject/body.

The interest-scoring unit test that asserted "DB is bypassed on cache hit"
is updated to reflect the new (security-correct) ordering. Two new
regression test files cover the email-draft binding (5 tests).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
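Finding 4's fix can be sketched with a Map standing in for redis. The point is the ordering (port-scope check before any cache read) and the key shape (portId baked in); the score itself is a stand-in value:

```typescript
type Interest = { id: string; portId: string };
const interests: Interest[] = [{ id: "i1", portId: "port-A" }];

const cache = new Map<string, number>(); // stand-in for redis

function calculateInterestScore(interestId: string, portId: string): number {
  // Port-scope lookup runs FIRST -- a foreign-port caller never reaches
  // the cache, hit or miss.
  const row = interests.find((i) => i.id === interestId && i.portId === portId);
  if (!row) throw new Error("interest not found in this port");

  // portId is part of the key, so two ports never share an entry.
  const key = `interest-score:${portId}:${interestId}`;
  const hit = cache.get(key);
  if (hit !== undefined) return hit;

  const score = 42; // stand-in for the real breakdown computation
  cache.set(key, score);
  return score;
}
```

This costs one extra DB lookup on cache hits, which is the price of the security-correct ordering the updated unit test now asserts.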
4c5334d471
sec: gate super-admin invite minting, OCR settings, and alert mutations
Three findings from the branch security review:
1. HIGH — Privilege escalation via super-admin invite. POST
/api/v1/admin/invitations was gated only by manage_users (held by the
port-scoped director role). The body schema accepted isSuperAdmin
from the request, createCrmInvite persisted it verbatim, and
consumeCrmInvite copied it into userProfiles.isSuperAdmin — granting
the new account cross-tenant access. Now the route rejects
isSuperAdmin=true unless ctx.isSuperAdmin, and createCrmInvite
requires invitedBy.isSuperAdmin as defense-in-depth.
2. HIGH — Receipt-image exfiltration via OCR settings. The route
/api/v1/admin/ocr-settings (and the sibling /test) were wrapped only
in withAuth — any port role including viewer could PUT a swapped
provider apiKey + flip aiEnabled, redirecting every subsequent
receipt scan to attacker infrastructure. Both are now wrapped in
withPermission('admin','manage_settings',…) matching the sibling
admin routes (ai-budget, settings).
3. MEDIUM — Cross-tenant alert IDOR. dismissAlert / acknowledgeAlert
issued UPDATE … WHERE id=? with no portId predicate. Any
authenticated user with a foreign alert UUID could mutate it. Both
service functions now require portId and add it to the WHERE; the
route handlers pass ctx.portId.
The dev-trigger-crm-invite script passes a synthetic super-admin caller
identity since it runs out-of-band.
The two public-form tests randomize their IP prefix per run so a fresh
test process doesn't collide with leftover redis sliding-window entries
from a prior run (publicForm limiter pexpires after 1h).
Two new regression test files cover the fixes (6 tests).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
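Finding 1's route-level gate reduces to a small check, sketched here with illustrative types (createCrmInvite repeats the caller check as defense-in-depth):

```typescript
type Caller = { isSuperAdmin: boolean };
type InviteBody = { email: string; isSuperAdmin?: boolean };

// Honour isSuperAdmin from the body only when the caller already holds it;
// a port-scoped director with manage_users can no longer escalate.
function assertInviteAllowed(caller: Caller, body: InviteBody): void {
  if (body.isSuperAdmin && !caller.isSuperAdmin) {
    throw new Error("only a super-admin may mint a super-admin invite");
  }
}
```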
61e40b5e76
chore(ops): split /api/health (liveness) from /api/ready (readiness)

Previously /api/health did deep dependency probes (postgres + redis +
minio) and 503'd on any failure. That is readiness behavior, not liveness —
a transient Redis/MinIO blip would tell the orchestrator to restart the pod
when it should only be dropped from the load balancer. /api/health is now a
thin liveness check (returns 200 unconditionally if the process is
responding), and the deep checks move to a new /api/ready endpoint with the
canonical Kubernetes-style 200/503 contract. Docker-compose healthchecks
keep pointing at /api/health, which is now more conservative (no
false-positive container restarts).

Documenso/SMTP are intentionally not probed in /api/ready: each tenant
configures its own credentials, and a single tenant's misconfiguration
shouldn't mark the entire shared CRM unready.

Also tighten the gdpr-bundle-builder casts: replace the scattered
`as unknown as Record<string, unknown>` double-casts with a small
`toJsonRow<T>()` helper that does the narrow→wide widening in one place,
with one cast hop instead of two.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
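The split reduces to two handlers, sketched here with synchronous stand-in probes; the real checks hit postgres/redis/minio and are asynchronous:

```typescript
type Probe = () => boolean; // stand-in for an async postgres/redis/minio check

// Liveness: if the process can answer at all, it answers 200.
// A dependency outage is not a reason to restart this pod.
function healthHandler(): { status: number } {
  return { status: 200 };
}

// Readiness: every deep probe must pass, otherwise 503 so the pod is
// dropped from the load balancer (but not restarted).
function readyHandler(probes: Probe[]): { status: number } {
  let ok = true;
  for (const probe of probes) {
    try { ok = probe() && ok; } catch { ok = false; }
  }
  return { status: ok ? 200 : 503 };
}
```

Under this contract, Kubernetes wires `/api/health` to the liveness probe and `/api/ready` to the readiness probe; a Redis blip then drains traffic instead of triggering a restart loop.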
43f68ca093
chore(hardening): maintenance jobs, defense-in-depth, redis-backed public rate limit

- maintenance worker now expires GDPR export bundles (db row + MinIO
  object) on the gdpr_exports.expires_at boundary, plus 90-day retention
  sweep on ai_usage_ledger; both jobs scheduled daily.
- portId scoping added to listClientRelationships and listClientExports
  (defense-in-depth — parent-resource gates already prevent cross-tenant
  reads, but the service layer should enforce on its own).
- SELECT FOR UPDATE on parent client/company row inside add/update address
  transactions to serialize concurrent isPrimary toggles.
- public /interests + /residential-inquiries endpoints swap their in-memory
  ipHits maps for the redis sliding-window limiter via the new
  rateLimiters.publicForm config (5/hr/IP), so the cap survives restarts
  and is shared across worker processes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
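The sliding window can be sketched with an array of timestamps standing in for the redis sorted set (5/hr/IP, matching publicForm; the real limiter stores this in redis so it survives restarts):

```typescript
const WINDOW_MS = 60 * 60 * 1000; // 1 hour
const LIMIT = 5;                  // 5 requests per IP per window

const hits = new Map<string, number[]>(); // stand-in for redis sorted sets

function allowRequest(ip: string, now: number): boolean {
  // Drop timestamps that have aged out of the window, then count.
  const recent = (hits.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= LIMIT) {
    hits.set(ip, recent);
    return false;
  }
  recent.push(now);
  hits.set(ip, recent);
  return true;
}
```

Unlike a fixed bucket, the window slides continuously, so a burst at the end of one hour cannot be immediately followed by another burst at the start of the next.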
a3305a94f3
feat(gdpr): staff-triggered client-data export bundle (Article 15)

Adds a full GDPR Article 15 (right of access) workflow. Staff trigger an
export from the client detail page; a BullMQ worker assembles every row
keyed to that client (profile, contacts, addresses, notes, tags, yachts,
company memberships, interests, reservations, invoices, documents, last 500
audit events) into JSON + a self-contained HTML report, ZIPs them, uploads
to MinIO, and optionally emails the client a 7-day signed download link.

- New table gdpr_exports tracks lifecycle (pending → building → ready →
  sent / failed) with a 30-day cleanup target
- Bundle builder (gdpr-bundle-builder.ts) — pure read-side, tenant-scoped,
  with HTML escaping to block injection from rogue field values
- Worker hook in the export queue dispatches on job name 'gdpr-export'
- New audit actions: 'request_gdpr_export', 'send_gdpr_export'
- API: POST/GET /api/v1/clients/:id/gdpr-export (admin-gated, exports
  rate-limit, Article-15 audit on POST); GET /:exportId returns a fresh
  signed URL
- UI: <GdprExportButton> dialog on the client detail header — admin-only,
  shows recent exports, supports email-to-client + override recipient,
  polls every 5s while open
- Validation: refuses email-to-client when there is no primary email and no
  override (rather than silently dropping the send)

Tests: 778/778 vitest (was 771) — +7 covering builder happy path, HTML
escaping, tenant isolation, empty client, request-flow validation, and
audit/queue interaction.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
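The escaping the bundle builder applies to rogue field values is presumably the classic five-character substitution, sketched here (the real helper may differ in detail):

```typescript
// Escape client-supplied field values before interpolating them into the
// self-contained HTML report. Ampersand must be replaced first.
function escapeHtml(value: string): string {
  return value
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

Because the report is opened offline from the ZIP, any unescaped `<script>` in a note or address field would otherwise execute in the recipient's browser.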
9dfa04094b
feat(rate-limit): per-user limiters for OCR, AI, and exports

Adds three named rate limiters to the existing Redis sliding-window catalog
and a withRateLimit wrapper that composes inside withAuth. Wires the OCR
limiter into the receipt-scan endpoint so a runaway client can't burn
through the AI budget in a tight loop.

- ocr: 10/min/user
- ai: 60/min/user (reserved for future server-side AI surfaces)
- exports: 30/hour/user (reserved for GDPR bundle, PDF, CSV exports)

429 responses include X-RateLimit-* headers and a Retry-After hint.

Tests: 771/771 vitest (was 766) — +5 rate-limit tests covering catalog
shape, sliding window, cross-prefix isolation, cross-user isolation, and
resetAt timestamp.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
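The wrapper composition might look like this sketch, with a plain counter standing in for the redis sliding window and illustrative header values (the real limiter computes Retry-After and the X-RateLimit-* figures from the window state):

```typescript
type AuthedCtx = { userId: string };
type Resp = { status: number; headers?: Record<string, string> };

const counts = new Map<string, number>(); // stand-in for the redis window

// Composes inside withAuth: by the time this runs, ctx.userId is resolved,
// so the limit is genuinely per-user rather than per-IP.
function withRateLimit(limit: number, h: (ctx: AuthedCtx) => Resp) {
  return (ctx: AuthedCtx): Resp => {
    const used = (counts.get(ctx.userId) ?? 0) + 1;
    counts.set(ctx.userId, used);
    if (used > limit) {
      return {
        status: 429,
        headers: { "X-RateLimit-Limit": String(limit), "Retry-After": "60" },
      };
    }
    return h(ctx);
  };
}
```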
e7d23b254c
feat(ai): per-port token budgets + usage ledger for AI features

Adds a token-denominated guardrail in front of every server-side AI call so
a misconfigured port can't run up an unbounded bill. Soft caps surface a
banner; hard caps refuse new requests until the period rolls over. Usage
flows into a feature-typed ledger so future AI surfaces (summary,
embeddings, reply-draft) can drop in without schema changes.

- New table ai_usage_ledger (port, user, feature, provider, model,
  input/output/total tokens, request id) with two indexes for rollups
- New service ai-budget.service.ts: getAiBudget/setAiBudget, checkBudget
  (pre-flight gate), recordAiUsage, currentPeriodTokens, periodBreakdown —
  all token-based, period boundaries in UTC
- runOcr now returns provider usage so the route can record the actual
  spend instead of estimating
- The scan-receipt route gates on checkBudget before invoking AI; returns
  source: manual / reason: budget-exceeded when blocked, and surfaces
  softCapWarning on the success path
- Admin UI: new AiBudgetCard on the OCR settings page — shows current
  spend, per-feature breakdown, soft/hard cap inputs, period selector
- Permission: admin.manage_settings on both routes

Tests: 766/766 vitest (was 756) — +10 budget tests covering
enforce/disabled/cap-exceed/estimate-exceed/soft-warn/period boundaries/
cross-port isolation/silent ledger failure.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a token-denominated guardrail in front of every server-side AI call so a misconfigured port can't run up an unbounded bill. Soft caps surface a banner; hard caps refuse new requests until the period rolls over. Usage flows into a feature-typed ledger so future AI surfaces (summary, embeddings, reply-draft) can drop in without schema changes. - New table ai_usage_ledger (port, user, feature, provider, model, input/output/total tokens, request id) with two indexes for rollup - New service ai-budget.service.ts: getAiBudget/setAiBudget, checkBudget (pre-flight gate), recordAiUsage, currentPeriodTokens, periodBreakdown — all token-based, period boundaries in UTC - runOcr now returns provider usage so the route can record the actual spend instead of estimating - Scan-receipt route gates on checkBudget before invoking AI; returns source: manual / reason: budget-exceeded when blocked, surfaces softCapWarning on the success path - Admin UI: new AiBudgetCard on the OCR settings page — shows current spend, per-feature breakdown, soft/hard cap inputs, period selector - Permission: admin.manage_settings on both routes Tests: 766/766 vitest (was 756) — +10 budget tests covering enforce/ disabled/cap-exceed/estimate-exceed/soft-warn/period boundaries/ cross-port isolation/silent ledger failure. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> |