Files
pn-new-crm/src/lib/db/migrations/meta/_journal.json

231 lines
4.6 KiB
JSON
Raw Normal View History

{
"version": "7",
"dialect": "postgresql",
"entries": [
{
"idx": 0,
"version": "7",
"when": 1776185027494,
"tag": "0000_narrow_longshot",
"breakpoints": true
},
{
"idx": 1,
"version": "7",
"when": 1776185487775,
"tag": "0001_soft_ender_wiggin",
"breakpoints": true
},
{
"idx": 2,
"version": "7",
"when": 1776958500747,
"tag": "0002_groovy_excalibur",
"breakpoints": true
},
{
"idx": 3,
"version": "7",
"when": 1776959610819,
"tag": "0003_opposite_lucky_pierre",
"breakpoints": true
},
{
"idx": 4,
"version": "7",
"when": 1776959707066,
"tag": "0004_nasty_warstar",
"breakpoints": true
},
{
"idx": 5,
"version": "7",
"when": 1776959832091,
"tag": "0005_stale_kronos",
"breakpoints": true
},
{
"idx": 6,
"version": "7",
"when": 1776959911400,
"tag": "0006_great_pixie",
"breakpoints": true
},
{
"idx": 7,
"version": "7",
"when": 1776959993173,
"tag": "0007_brainy_felicia_hardy",
"breakpoints": true
refactor(clients): drop deprecated yacht/company/proxy columns PR 13: now that all reads are migrated to the dedicated yacht / company / membership entities, drop the columns that mirrored them on `clients`: companyName, isProxy, proxyType, actualOwnerName, relationshipNotes, yachtName, yachtLength{Ft,M}, yachtWidth{Ft,M}, yachtDraft{Ft,M}, berthSizeDesired. Migration `0008_loud_ikaris.sql` issues the destructive ALTER TABLE DROP COLUMN statements. Run `pnpm db:push` (or the migration runner) to apply. Caller cleanup (zero behavioral change to remaining flows): - Drops the legacy `generateEoi` flow entirely (route, service function, pdfme template, validator schema). The dual-path generate-and-sign service from PR 11 has fully replaced it; the route was no longer wired to the UI. - `clients.service`: company-name search column / WHERE / audit value removed; search now ranks by full name only. - `interests.service`: `resolveLeadCategory` reads dimensions from `yachts` via `interest.yachtId` instead of the dropped `client.yachtLength{Ft,M}`. - `record-export`: client-summary now lists yachts via owner-side lookup (direct + active company memberships); interest-summary fetches yacht via `interest.yachtId`. Both PDF templates updated to read yacht details from the new entity. - `client-detail-header`, `client-picker`, `command-search`, `search-result-item`, `use-search` hook, `types/domain.ts`, `search.service` — drop the companyName badge / sub-label / typed field everywhere it was rendered or fetched. - `ai.ts` worker: drop the company / yacht context lines from the prompt (will be re-added later sourced from the new entities). - `validators/interests.ts`: remove the deprecated public-form flat yacht/company fields. The route already ignores them. - `factories.ts`: drop the `isProxy: false` default. Tests: 652/652 green; type-check clean. The `security-sensitive-data` tests use `companyName` / `isProxy` as arbitrary record keys for a generic util — left unchanged. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 13:57:54 +02:00
},
{
"idx": 8,
"version": "7",
"when": 1777204563579,
"tag": "0008_loud_ikaris",
"breakpoints": true
feat(portal): replace magic-link with email/password + admin-initiated activation The client portal no longer uses passwordless / magic-link sign-in. Each client now has a `portal_users` row with a scrypt-hashed password, created by an admin from the client detail page; the admin's invite mails an activation link that the client uses to set their own password. Forgot-password is wired through the same token mechanism. Schema (migration `0009_outgoing_rumiko_fujikawa.sql`): - `portal_users` — one per client account, separate from the CRM `users` table (better-auth) so the auth realms stay isolated. Email is globally unique, password is null until activation. - `portal_auth_tokens` — single-use activation / reset tokens. Stores only the SHA-256 hash so a DB compromise never leaks live tokens. Services: - `src/lib/portal/passwords.ts` — scrypt hash/verify (no new deps; uses node:crypto), token mint+hash helpers. - `src/lib/services/portal-auth.service.ts` — createPortalUser, resendActivation, activateAccount, signIn (timing-safe), requestPasswordReset, resetPassword. Auth failures throw the new UnauthorizedError (401); enumeration-safe behaviour everywhere. Routes: - POST /api/portal/auth/sign-in — sets the existing portal JWT cookie. - POST /api/portal/auth/forgot-password — always 200. - POST /api/portal/auth/reset-password — token + new password. - POST /api/portal/auth/activate — token + initial password. - POST /api/v1/clients/:id/portal-user — admin invite (and `?action=resend`). - Removed: /api/portal/auth/request, /api/portal/auth/verify (magic link). UI: - /portal/login — replaced email-only magic-link form with email + password + "forgot password" link. - /portal/forgot-password, /portal/reset-password, /portal/activate — new. - New shared `PasswordSetForm` component used by activate + reset. - New `PortalInviteButton` rendered on the client detail header. Email send: - `createTransporter` now wires SMTP auth when SMTP_USER+SMTP_PASS are set (gmail app-password or marina-server creds, configured via env). - `SMTP_FROM` env var lets the sender address be overridden without pinning it to `noreply@${SMTP_HOST}`. Tests: - Smoke spec 17 (client-portal) updated to the new flow: 7/7 green. - Smoke specs 02-crud-spine, 05-invoices, 20-critical-path updated to match the post-refactor client + invoice forms (drop companyName, use OwnerPicker + billingEmail). - Vitest 652/652 still green; type-check clean. Drops the dead `requestMagicLink` from portal.service.ts. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 15:34:02 +02:00
},
{
"idx": 9,
"version": "7",
"when": 1777210206070,
"tag": "0009_outgoing_rumiko_fujikawa",
"breakpoints": true
feat(platform): residential module + admin UI + reliability fixes Residential platform - New schema: residentialClients, residentialInterests (separate from marina/yacht clients) with migration 0010 - Service layer with CRUD + audit + sockets + per-port portal toggle - v1 + public API routes (/api/v1/residential/*, /api/public/residential-inquiries) - List + detail pages with inline editing for clients and interests - Per-user residentialAccess toggle on userPortRoles (migration 0011) - Permission keys: residential_clients, residential_interests - Sidebar nav + role form integration - Smoke spec covering page loads, UI create flow, public endpoint Admin & shared UI - Admin → Forms (form templates CRUD) with validators + service - Notification preferences page (in-app + email per type) - Email composition + accounts list + threads view - Branded auth shell shared across CRM + portal auth surfaces - Inline editing extended to yacht/company/interest detail pages - InlineTagEditor + per-entity tags endpoints (yachts, companies) - Notes service polymorphic across clients/interests/yachts/companies - Client list columns: yachtCount + companyCount badges - Reservation file-download via presigned URL (replaces stale <a href>) Route handler refactor - Extracted yachts/companies/berths reservation handlers to sibling handlers.ts files (Next.js 15 route.ts only allows specific exports) Reliability fixes - apiFetch double-stringify bug fixed across 13 components (apiFetch already JSON.stringifies its body; passing a stringified body produced double-encoded JSON which failed zod validation) - SocketProvider gated behind useSyncExternalStore-based mount check to avoid useSession() SSR crashes under React 19 + Next 15 - apiFetch falls back to URL-pathname → port-id resolution when the Zustand store hasn't hydrated yet (fresh contexts, e2e tests) - CRM invite flow (schema, service, route, email, dev script) - Dashboard route → [portSlug]/dashboard/page.tsx + redirect - Document the dev-server restart-after-migration gotcha in CLAUDE.md Tests - 5-case residential smoke spec - Integration test updates for new service signatures Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 21:54:32 +02:00
},
{
"idx": 10,
"version": "7",
"when": 1777303428222,
"tag": "0010_brave_joshua_kane",
"breakpoints": true
},
{
"idx": 11,
"version": "7",
"when": 1777307410311,
"tag": "0011_red_cargill",
"breakpoints": true
},
{
"idx": 12,
"version": "7",
"when": 1777308900666,
"tag": "0012_large_zarda",
"breakpoints": true
},
{
"idx": 13,
"version": "7",
"when": 1777334766194,
"tag": "0013_abnormal_thundra",
"breakpoints": true
feat(insights): Phase B schema + service skeletons PR1 of Phase B per docs/superpowers/specs/2026-04-28-phase-b-insights-alerts-design.md. Lays the foundation that PRs 2-10 will fill in with behaviour. Schema (migration 0014): - alerts table with rule-engine fields (rule_id, severity, link, entity_type/id, fingerprint, fired/dismissed/acknowledged/resolved timestamps, jsonb metadata). Partial-unique fingerprint index keeps one open row per (port, rule, entity); separate indexes power severity-filtered and time-ordered queries. - analytics_snapshots (port_id, metric_id) -> jsonb cache + computedAt for the 15-min recurring refresh. - expenses: duplicate_of self-FK, dedup_scanned_at, ocr_status/raw/ confidence; partial index on (port, vendor, amount, date) where duplicate_of IS NULL drives the dedup heuristic. - audit_logs.search_text: GENERATED ALWAYS tsvector over action+entity_type+entity_id+user_id, GIN-indexed (drizzle can't model GENERATED ALWAYS in TS yet, so the migration appends manual ALTER + the GIN index). Service skeletons in src/lib/services/: - alerts.service.ts: fingerprintFor, reconcileAlertsForPort (upsert + auto-resolve), dismiss, acknowledge, listAlertsForPort. - alert-rules.ts: RULE_REGISTRY of 10 rule evaluators (currently no-op); PR2 fills in the bodies. - analytics.service.ts: readSnapshot/writeSnapshot with 15-min TTL + no-op compute* stubs for the four chart series; PR3 fills behavior. - expense-dedup.service.ts: scanForDuplicates + markBestDuplicate using the partial dedup index. PR8 wires the BullMQ trigger. - expense-ocr.service.ts: OcrResult/OcrLineItem types + ocrReceipt stub. PR9 wires Claude Vision (Haiku 4.5 + ephemeral system-prompt cache). - audit-search.service.ts: tsvector @@ plainto_tsquery + cursor pagination on (createdAt, id). PR10 wires the admin UI. tsc clean, lint clean, vitest 675/675 (one unrelated AES random-output flake passes solo). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 14:43:01 +02:00
},
{
"idx": 14,
"version": "7",
"when": 1777379952283,
"tag": "0014_black_banshee",
"breakpoints": true
feat(i18n): country/phone/timezone/subdivision primitives + form wiring Cross-cutting i18n polish for forms across the marina + residential + company domains. Introduces a single source of truth for country/phone/timezone/ subdivision data and replaces every nationality-as-free-text and timezone- as-string Input with a dedicated combobox. PR1 Countries — ALL_COUNTRY_CODES (~250 ISO-3166-1 alpha-2), Intl.DisplayNames for localized labels, detectDefaultCountry() with navigator-region fallback to US, CountryCombobox with regional-indicator flag glyphs + compact mode for inline use. PR2 Phone — libphonenumber-js wrapper (parsePhone / formatAsYouType / callingCodeFor), PhoneInput with flag dropdown + national-format AsYouType + paste-detect that flips the country dropdown for pasted international strings. PR3 Timezones — country->IANA map (250 entries, multi-zone for AU/BR/CA/CD/ ID/KZ/MN/MX/RU/US), formatTimezoneLabel ("Europe/London (UTC+1)"), TimezoneCombobox with Suggested/All grouping driven by countryHint. PR4 Subdivisions — wraps the iso-3166-2 npm package (~5000 ISO 3166-2 codes for every country), per-country cache, SubdivisionCombobox with "Pick a country first" / "No regions available" empty states. PR5 Schema deltas (migration 0015) — clients.nationality_iso, clientContacts {value_e164, value_country}, clientAddresses {country_iso, subdivision_iso}, residentialClients {phone_e164, phone_country, nationality_iso, timezone, place_of_residence_country_iso, subdivision_iso}, companies {incorporation_ country_iso, incorporation_subdivision_iso}, companyAddresses {country_iso, subdivision_iso}. Plus shared zod validators (validators/i18n.ts) used by every entity validator + route handler. PR6 ClientForm + ClientDetail — CountryCombobox replaces nationality Input, TimezoneCombobox replaces timezone Input (driven by nationalityIso hint), PhoneInput conditionally rendered for phone/whatsapp contacts. Inline editors (InlineCountryField / InlineTimezoneField / InlinePhoneField) for the detail-page overview rows + ContactsEditor. PR7 Residential client form + detail — phone -> PhoneInput, nationality/ timezone/place-of-residence-country/subdivision rows in both create sheet and inline-editable detail view. Subdivision wipes when country flips since codes are country-scoped. PR8 Company form + detail — incorporation country -> CountryCombobox, incorporation region -> SubdivisionCombobox in both modes. PR9 Public inquiry endpoint — accepts pre-normalized phoneE164/phoneCountry and i18n fields from newer website builds, server-side parsePhone() fallback for legacy raw-international submissions. Old Nuxt builds keep working unchanged. Tests: 4 unit suites for the primitives (25 tests), 1 integration spec for the public phone-normalization path (3 tests), 1 smoke spec asserting the combobox triggers render in all three create sheets. Test totals: vitest 713 -> 741 (+28). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 18:13:08 +02:00
},
{
"idx": 15,
"version": "7",
"when": 1777391373291,
"tag": "0015_i18n_columns",
"breakpoints": true
},
{
"idx": 16,
"version": "7",
"when": 1777395538988,
"tag": "0016_magical_spyke",
"breakpoints": true
},
{
"idx": 17,
"version": "7",
"when": 1777398450555,
"tag": "0017_tiny_mercury",
"breakpoints": true
feat(gdpr): staff-triggered client-data export bundle (Article 15) Adds a full GDPR Article 15 (right of access) workflow. Staff trigger an export from the client detail; a BullMQ worker assembles every row keyed to that client (profile, contacts, addresses, notes, tags, yachts, company memberships, interests, reservations, invoices, documents, last 500 audit events) into JSON + a self-contained HTML report, ZIPs them, uploads to MinIO, and optionally emails the client a 7-day signed download link. - New table gdpr_exports tracks lifecycle (pending → building → ready → sent / failed) with a 30-day cleanup target - Bundle builder (gdpr-bundle-builder.ts) — pure read-side, tenant- scoped, with HTML escaping to block injection from rogue field values - Worker hook in export queue dispatches on job name 'gdpr-export' - New audit actions: 'request_gdpr_export', 'send_gdpr_export' - API: POST/GET /api/v1/clients/:id/gdpr-export (admin-gated, exports rate-limit, Article-15 audit on POST); GET /:exportId returns a fresh signed URL - UI: <GdprExportButton> dialog on client detail header — admin-only, shows recent exports, supports email-to-client + override recipient, polls every 5s while open - Validation: refuses email-to-client when no primary email + no override (rather than silently dropping the send) Tests: 778/778 vitest (was 771) — +7 covering builder happy path, HTML escaping, tenant isolation, empty client, request-flow validation, and audit / queue interaction. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 20:06:31 +02:00
},
{
"idx": 18,
"version": "7",
"when": 1777399135032,
"tag": "0018_stormy_spencer_smythe",
"breakpoints": true
feat(sales): EOI queue route + invoice→deposit auto-advance + won/lost outcomes Three independent strengthenings of the sales spine that the prior coherence sweep made it possible to do cleanly. 1. EOI queue page - Sidebar entry under Documents → "EOI queue". - Route /[port]/documents/eoi renders DocumentsHub with the existing eoi_queue tab pre-selected (filters in-flight EOIs only). - .gitignore: tightened root-only `eoi/` ignore so the documents/eoi route is no longer silently excluded. 2. Invoice ↔ deposit link - invoices.interestId (FK, ON DELETE SET NULL) + invoices.kind ('general' | 'deposit'). Indexed on (port_id, interest_id). - createInvoiceSchema requires interestId when kind === 'deposit'; the service validates the linked interest belongs to the same port before insert. - recordPayment auto-advances pipelineStage to deposit_10pct (via advanceStageIfBehind) when a paid invoice is kind=deposit and has an interestId. No-op if the interest is already further along. - "Create deposit invoice" link added to the Deposit milestone on the interest detail. Links to /invoices/new?interestId=…&kind=deposit; the form prefills the billing entity from the linked interest's client and shows a context banner. 3. Won / lost terminal outcomes - interests.outcome ('won' | 'lost_other_marina' | 'lost_unqualified' | 'lost_no_response' | 'cancelled') + outcomeReason text + outcomeAt timestamp. Indexed on (port_id, outcome). - setInterestOutcome / clearInterestOutcome services + POST/DELETE /api/v1/interests/:id/outcome endpoints (gated by change_stage permission). Setting an outcome moves the interest to `completed` in the same write; clearing reopens to `in_communication` (or a caller-specified stage). - Mark Won / Mark Lost icon buttons on the interest detail header, plus an outcome badge that replaces the stage pill once a terminal outcome is set, plus a Reopen button. - Funnel + dashboard math updated to exclude lost/cancelled outcomes from active calculations (KPIs.activeInterests, pipelineValueUsd, getPipelineCounts, computePipelineFunnel, getRevenueForecast). The funnel now also returns a `lost` summary so callers can surface leakage without polluting conversion percentages. Schema changes shipped via 0019_lazy_vampiro.sql; applied to dev DB manually via psql because drizzle-kit push hits a pre-existing zod parsing issue on the companies index. Dev server may need a restart to flush prepared-statement caches. tsc clean. vitest 832/832 pass. ESLint clean on every file touched. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 00:01:33 +02:00
},
{
"idx": 19,
"version": "7",
"when": 1777671562738,
"tag": "0019_lazy_vampiro",
"breakpoints": true
feat(berths): full NocoDB field parity, numeric types, sales edit access Aligns the berths schema with the 117 production rows in NocoDB and exposes every field for editing via the BerthForm sheet. Schema (migration 0020): - power_capacity / voltage / nominal_boat_size / nominal_boat_size_m: text -> numeric (NocoDB stores plain numbers; text was wrong shape and broke filter/sort) - ADD status_override_mode text (1/117 legacy rows have a value; carried forward for parity but not yet wired into the UI) - USING NULLIF(TRIM(...), '')::numeric so legacy whitespace and empty strings convert cleanly Validator + service: - updateBerthSchema / createBerthSchema use z.coerce.number() for the four numeric fields - berths.service stringifies numeric values for Drizzle's numeric type Form (src/components/berths/berth-form.tsx): - adds: nominal boat size (ft/m), water depth (ft/m) + "is minimum" flag, side pontoon, cleat type/capacity, bollard type/capacity, bow facing - converts to typed selects (with NocoDB option lists in src/lib/constants): area, side pontoon, mooring type, cleat type/capacity, bollard type/capacity, access - power capacity / voltage become numeric inputs (with kW / V hints) Permissions (seed.ts + dev DB): - sales_manager and sales_agent: berths.edit false -> true ("sales will sometimes have to update these and I cannot be the only one") - super_admin / director already had it; viewer stays read-only - dev DB updated in-place via UPDATE roles ... jsonb_set Verification: - pnpm exec vitest run: 858/858 passing - pnpm exec tsc --noEmit: same 36 errors as baseline (all pre-existing on feat/mobile-foundation, none introduced) - lint clean Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 15:30:32 +02:00
},
{
"idx": 20,
"version": "7",
"when": 1777811835982,
"tag": "0020_unusual_azazel",
feat(berths): full NocoDB field parity, numeric types, sales edit access Aligns the berths schema with the 117 production rows in NocoDB and exposes every field for editing via the BerthForm sheet. Schema (migration 0020): - power_capacity / voltage / nominal_boat_size / nominal_boat_size_m: text -> numeric (NocoDB stores plain numbers; text was wrong shape and broke filter/sort) - ADD status_override_mode text (1/117 legacy rows have a value; carried forward for parity but not yet wired into the UI) - USING NULLIF(TRIM(...), '')::numeric so legacy whitespace and empty strings convert cleanly Validator + service: - updateBerthSchema / createBerthSchema use z.coerce.number() for the four numeric fields - berths.service stringifies numeric values for Drizzle's numeric type Form (src/components/berths/berth-form.tsx): - adds: nominal boat size (ft/m), water depth (ft/m) + "is minimum" flag, side pontoon, cleat type/capacity, bollard type/capacity, bow facing - converts to typed selects (with NocoDB option lists in src/lib/constants): area, side pontoon, mooring type, cleat type/capacity, bollard type/capacity, access - power capacity / voltage become numeric inputs (with kW / V hints) Permissions (seed.ts + dev DB): - sales_manager and sales_agent: berths.edit false -> true ("sales will sometimes have to update these and I cannot be the only one") - super_admin / director already had it; viewer stays read-only - dev DB updated in-place via UPDATE roles ... jsonb_set Verification: - pnpm exec vitest run: 858/858 passing - pnpm exec tsc --noEmit: same 36 errors as baseline (all pre-existing on feat/mobile-foundation, none introduced) - lint clean Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 15:30:32 +02:00
"breakpoints": true
},
feat(dedup): NocoDB migration script + tables (P3 dry-run) Lands the one-shot migration pipeline from the legacy NocoDB Interests base into the new client/interest schema. Dry-run mode is fully operational: pulls the live snapshot, runs the dedup library, and writes a CSV + Markdown report under .migration/<timestamp>/. The --apply phase is stubbed for a follow-up PR per the design's P3 implementation sequence. Schema additions ================ - `client_merge_candidates` — pairs flagged by the background scoring job for the /admin/duplicates review queue. Status enum: pending / dismissed / merged. Unique-(portId, clientAId, clientBId) so the same pair can't surface twice. Empty until P2 lands the cron. - `migration_source_links` — idempotency ledger. Maps source-system rows (NocoDB Interest #624 → new client UUID) so re-running --apply against the same dry-run report skips already-imported entities. Both tables ship with the migration `0020_unusual_azazel.sql` — already applied to the local dev DB during this commit's preparation. Library ======= src/lib/dedup/nocodb-source.ts Read-only adapter for the legacy NocoDB v2 API. xc-token auth, auto-paginates until isLastPage, captures the table IDs from the 2026-05-03 audit. `fetchSnapshot()` pulls every relevant table in parallel into one in-memory object the transform layer consumes. src/lib/dedup/migration-transform.ts Pure function: NocoDB snapshot in, MigrationPlan out. Per row: - normalizes name / email / phone / country via the dedup library - parses the legacy DD-MM-YYYY / DD/MM/YYYY / ISO date formats - maps the 8-stage `Sales Process Level` enum to the new 9-stage pipelineStage - filters yacht-name placeholders ('TBC', 'Na', etc.) - merges Internal Notes + Extra Comments + Berth Size Desired into a single notes blob Then runs `findClientMatches` pairwise (with blocking) and union-finds clusters of rows whose score crosses the auto-link threshold (90). Lower-scoring pairs (50–89) become 'needs review'. Each cluster's "lead" row is picked by completeness score with recency tie-break. src/lib/dedup/migration-report.ts Writes three artifacts to .migration/<timestamp>/: - report.csv — one row per planned op, RFC-4180 escaped - summary.md — human-skimmable overview - plan.json — full structured plan for the --apply phase CSV cells with comma / quote / newline are quoted; internal quotes are doubled. No external CSV dep. src/lib/dedup/phone-parse.ts Script-safe wrapper around libphonenumber-js's `core` entry that loads `metadata.min.json` directly. The default `index.cjs.js` bundled by libphonenumber hits a metadata-shape interop bug under Node 25 + tsx (`{ default }` wrapping); core+JSON sidesteps it. The dedup `normalizePhone` and `find-matches` both use this wrapper now so the same code path runs in vitest, Next.js, and the migration CLI without surprises. src/lib/dedup/normalize.ts Tightened country resolution: added Caribbean short-form aliases ('antigua' → AG, 'st kitts' → KN, etc.) and a city map covering the US locations seen in the NocoDB dump (Boston, Tampa, Fort Lauderdale, Port Jefferson, Nantucket). Also relaxed phone parsing to drop the `isValid()` strict check — the libphonenumber min build rejects many real NANP-territory numbers, and dedup only needs a canonical E.164 to compare. CLI === scripts/migrate-from-nocodb.ts pnpm tsx scripts/migrate-from-nocodb.ts --dry-run → Pulls the live NocoDB base (NOCODB_URL + NOCODB_TOKEN env vars), runs the transform, writes report. No DB writes. pnpm tsx scripts/migrate-from-nocodb.ts --apply --report .migration/<dir>/ → Stubbed; exits with `not yet implemented` and a pointer to the design doc. Apply phase ships in a follow-up. Tests ===== tests/unit/dedup/migration-transform.test.ts (7 cases) Fixture-based regression. A frozen 12-row NocoDB snapshot covers every duplicate pattern in the design (§1.2). The test asserts: - 12 input rows → 7 unique clients (cluster math is right) - Patterns A / B / C / E auto-link - Pattern F (Etiennette Clamouze) does NOT auto-link - Every interest preserved as its own row even when clients merge - 8-stage → 9-stage enum mapping is correct per spec - Multi-yacht merge (Constanzo CALYPSO + Costanzo GEMINI under one client) — the design's signature win - Output is deterministic (run twice, identical) Validation against real data ============================ Ran `pnpm tsx scripts/migrate-from-nocodb.ts --dry-run` against the live NocoDB. Result on 252 Interests rows: - 237 clients (15 merged into 13 clusters) - 252 interests (one per source row) - 406 contacts, 52 addresses - 13 auto-linked clusters (every confirmed cluster from §1.2 audit) - 3 pairs flagged for review (Camazou, Zasso, one new) - 1 phone placeholder flagged Total dedup test count: 57 (50 from P1 + 7 fixture tests). Lint: clean. Tsc: clean for new files. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 14:50:01 +02:00
{
"idx": 21,
feat(dedup): NocoDB migration script + tables (P3 dry-run) Lands the one-shot migration pipeline from the legacy NocoDB Interests base into the new client/interest schema. Dry-run mode is fully operational: pulls the live snapshot, runs the dedup library, and writes a CSV + Markdown report under .migration/<timestamp>/. The --apply phase is stubbed for a follow-up PR per the design's P3 implementation sequence. Schema additions ================ - `client_merge_candidates` — pairs flagged by the background scoring job for the /admin/duplicates review queue. Status enum: pending / dismissed / merged. Unique-(portId, clientAId, clientBId) so the same pair can't surface twice. Empty until P2 lands the cron. - `migration_source_links` — idempotency ledger. Maps source-system rows (NocoDB Interest #624 → new client UUID) so re-running --apply against the same dry-run report skips already-imported entities. Both tables ship with the migration `0020_unusual_azazel.sql` — already applied to the local dev DB during this commit's preparation. Library ======= src/lib/dedup/nocodb-source.ts Read-only adapter for the legacy NocoDB v2 API. xc-token auth, auto-paginates until isLastPage, captures the table IDs from the 2026-05-03 audit. `fetchSnapshot()` pulls every relevant table in parallel into one in-memory object the transform layer consumes. src/lib/dedup/migration-transform.ts Pure function: NocoDB snapshot in, MigrationPlan out. Per row: - normalizes name / email / phone / country via the dedup library - parses the legacy DD-MM-YYYY / DD/MM/YYYY / ISO date formats - maps the 8-stage `Sales Process Level` enum to the new 9-stage pipelineStage - filters yacht-name placeholders ('TBC', 'Na', etc.) - merges Internal Notes + Extra Comments + Berth Size Desired into a single notes blob Then runs `findClientMatches` pairwise (with blocking) and union-finds clusters of rows whose score crosses the auto-link threshold (90). Lower-scoring pairs (50–89) become 'needs review'. Each cluster's "lead" row is picked by completeness score with recency tie-break. src/lib/dedup/migration-report.ts Writes three artifacts to .migration/<timestamp>/: - report.csv — one row per planned op, RFC-4180 escaped - summary.md — human-skimmable overview - plan.json — full structured plan for the --apply phase CSV cells with comma / quote / newline are quoted; internal quotes are doubled. No external CSV dep. src/lib/dedup/phone-parse.ts Script-safe wrapper around libphonenumber-js's `core` entry that loads `metadata.min.json` directly. The default `index.cjs.js` bundled by libphonenumber hits a metadata-shape interop bug under Node 25 + tsx (`{ default }` wrapping); core+JSON sidesteps it. The dedup `normalizePhone` and `find-matches` both use this wrapper now so the same code path runs in vitest, Next.js, and the migration CLI without surprises. src/lib/dedup/normalize.ts Tightened country resolution: added Caribbean short-form aliases ('antigua' → AG, 'st kitts' → KN, etc.) and a city map covering the US locations seen in the NocoDB dump (Boston, Tampa, Fort Lauderdale, Port Jefferson, Nantucket). Also relaxed phone parsing to drop the `isValid()` strict check — the libphonenumber min build rejects many real NANP-territory numbers, and dedup only needs a canonical E.164 to compare. CLI === scripts/migrate-from-nocodb.ts pnpm tsx scripts/migrate-from-nocodb.ts --dry-run → Pulls the live NocoDB base (NOCODB_URL + NOCODB_TOKEN env vars), runs the transform, writes report. No DB writes. pnpm tsx scripts/migrate-from-nocodb.ts --apply --report .migration/<dir>/ → Stubbed; exits with `not yet implemented` and a pointer to the design doc. Apply phase ships in a follow-up. Tests ===== tests/unit/dedup/migration-transform.test.ts (7 cases) Fixture-based regression. A frozen 12-row NocoDB snapshot covers every duplicate pattern in the design (§1.2). The test asserts: - 12 input rows → 7 unique clients (cluster math is right) - Patterns A / B / C / E auto-link - Pattern F (Etiennette Clamouze) does NOT auto-link - Every interest preserved as its own row even when clients merge - 8-stage → 9-stage enum mapping is correct per spec - Multi-yacht merge (Constanzo CALYPSO + Costanzo GEMINI under one client) — the design's signature win - Output is deterministic (run twice, identical) Validation against real data ============================ Ran `pnpm tsx scripts/migrate-from-nocodb.ts --dry-run` against the live NocoDB. Result on 252 Interests rows: - 237 clients (15 merged into 13 clusters) - 252 interests (one per source row) - 406 contacts, 52 addresses - 13 auto-linked clusters (every confirmed cluster from §1.2 audit) - 3 pairs flagged for review (Camazou, Zasso, one new) - 1 phone placeholder flagged Total dedup test count: 57 (50 from P1 + 7 fixture tests). Lint: clean. Tsc: clean for new files. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 14:50:01 +02:00
"version": "7",
"when": 1777812671833,
"tag": "0021_magenta_madame_hydra",
feat(dedup): NocoDB migration script + tables (P3 dry-run) Lands the one-shot migration pipeline from the legacy NocoDB Interests base into the new client/interest schema. Dry-run mode is fully operational: pulls the live snapshot, runs the dedup library, and writes a CSV + Markdown report under .migration/<timestamp>/. The --apply phase is stubbed for a follow-up PR per the design's P3 implementation sequence. Schema additions ================ - `client_merge_candidates` — pairs flagged by the background scoring job for the /admin/duplicates review queue. Status enum: pending / dismissed / merged. Unique-(portId, clientAId, clientBId) so the same pair can't surface twice. Empty until P2 lands the cron. - `migration_source_links` — idempotency ledger. Maps source-system rows (NocoDB Interest #624 → new client UUID) so re-running --apply against the same dry-run report skips already-imported entities. Both tables ship with the migration `0020_unusual_azazel.sql` — already applied to the local dev DB during this commit's preparation. Library ======= src/lib/dedup/nocodb-source.ts Read-only adapter for the legacy NocoDB v2 API. xc-token auth, auto-paginates until isLastPage, captures the table IDs from the 2026-05-03 audit. `fetchSnapshot()` pulls every relevant table in parallel into one in-memory object the transform layer consumes. src/lib/dedup/migration-transform.ts Pure function: NocoDB snapshot in, MigrationPlan out. Per row: - normalizes name / email / phone / country via the dedup library - parses the legacy DD-MM-YYYY / DD/MM/YYYY / ISO date formats - maps the 8-stage `Sales Process Level` enum to the new 9-stage pipelineStage - filters yacht-name placeholders ('TBC', 'Na', etc.) - merges Internal Notes + Extra Comments + Berth Size Desired into a single notes blob Then runs `findClientMatches` pairwise (with blocking) and union-finds clusters of rows whose score crosses the auto-link threshold (90). Lower-scoring pairs (50–89) become 'needs review'. Each cluster's "lead" row is picked by completeness score with recency tie-break. src/lib/dedup/migration-report.ts Writes three artifacts to .migration/<timestamp>/: - report.csv — one row per planned op, RFC-4180 escaped - summary.md — human-skimmable overview - plan.json — full structured plan for the --apply phase CSV cells with comma / quote / newline are quoted; internal quotes are doubled. No external CSV dep. src/lib/dedup/phone-parse.ts Script-safe wrapper around libphonenumber-js's `core` entry that loads `metadata.min.json` directly. The default `index.cjs.js` bundled by libphonenumber hits a metadata-shape interop bug under Node 25 + tsx (`{ default }` wrapping); core+JSON sidesteps it. The dedup `normalizePhone` and `find-matches` both use this wrapper now so the same code path runs in vitest, Next.js, and the migration CLI without surprises. src/lib/dedup/normalize.ts Tightened country resolution: added Caribbean short-form aliases ('antigua' → AG, 'st kitts' → KN, etc.) and a city map covering the US locations seen in the NocoDB dump (Boston, Tampa, Fort Lauderdale, Port Jefferson, Nantucket). Also relaxed phone parsing to drop the `isValid()` strict check — the libphonenumber min build rejects many real NANP-territory numbers, and dedup only needs a canonical E.164 to compare. CLI === scripts/migrate-from-nocodb.ts pnpm tsx scripts/migrate-from-nocodb.ts --dry-run → Pulls the live NocoDB base (NOCODB_URL + NOCODB_TOKEN env vars), runs the transform, writes report. No DB writes. pnpm tsx scripts/migrate-from-nocodb.ts --apply --report .migration/<dir>/ → Stubbed; exits with `not yet implemented` and a pointer to the design doc. Apply phase ships in a follow-up. Tests ===== tests/unit/dedup/migration-transform.test.ts (7 cases) Fixture-based regression. A frozen 12-row NocoDB snapshot covers every duplicate pattern in the design (§1.2). The test asserts: - 12 input rows → 7 unique clients (cluster math is right) - Patterns A / B / C / E auto-link - Pattern F (Etiennette Clamouze) does NOT auto-link - Every interest preserved as its own row even when clients merge - 8-stage → 9-stage enum mapping is correct per spec - Multi-yacht merge (Constanzo CALYPSO + Costanzo GEMINI under one client) — the design's signature win - Output is deterministic (run twice, identical) Validation against real data ============================ Ran `pnpm tsx scripts/migrate-from-nocodb.ts --dry-run` against the live NocoDB. Result on 252 Interests rows: - 237 clients (15 merged into 13 clusters) - 252 interests (one per source row) - 406 contacts, 52 addresses - 13 auto-linked clusters (every confirmed cluster from §1.2 audit) - 3 pairs flagged for review (Camazou, Zasso, one new) - 1 phone placeholder flagged Total dedup test count: 57 (50 from P1 + 7 fixture tests). Lint: clean. Tsc: clean for new files. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 14:50:01 +02:00
"breakpoints": true
feat(dedup): runtime surfaces — merge service, at-create suggestion, admin queue (P2) Adds the live dedup pipeline on top of the P1 library + P3 migration script. The new `client/interest` model now actively prevents duplicate client records at creation time and gives admins a queue to triage the borderline pairs the at-create check missed. Three layers, per design §7: Layer 1 — At-create suggestion ============================== `GET /api/v1/clients/match-candidates` Accepts free-text email / phone / name from the in-flight client form, normalizes them via the dedup library, and returns scored matches against the port's live client pool. Filters out low-confidence noise (the background scoring queue picks those up separately). Strict port scoping; never leaks across tenants. `<DedupSuggestionPanel>` (`src/components/clients/dedup-suggestion-panel.tsx`) Debounced React Query hook. Renders nothing for short inputs or no useful match. On a high-confidence match it interrupts visually with an amber-tinted card and a "Use this client" primary button. Medium confidence falls back to a softer "possible match — check before creating" treatment. `<ClientForm>` Renders the panel above the form (create path only — skipped on edit). New `onUseExistingClient` callback fires when the user picks the existing client; the form closes and the parent decides what to do (typically: navigate to that client's detail page or open the create-interest dialog pre-filled). Layer 2 — Merge service ======================= `mergeClients` (`src/lib/services/client-merge.service.ts`) The atomic merge primitive that everything else calls. Single transaction. Per §6 of the design: - Locks both rows (FOR UPDATE) so concurrent merges of the same loser fail with a clear error rather than racing. - Snapshots the full loser state (contacts / addresses / notes / tags / interest+reservation IDs / relationship rows) into the `client_merge_log.merge_details` JSONB column for the eventual undo flow. - Reattaches every loser-side row to the winner: interests, reservations, contacts (skipping duplicates by `(channel, value)`), addresses, notes, tags (deduped), relationships. - Optional `fieldChoices` — per-scalar overrides letting the user keep the loser's value for fullName / nationality / preferences / timezone / source. - Marks the loser archived with `mergedIntoClientId` set (a redirect pointer for stragglers; never hard-deleted within the undo window). - Resolves any matching `client_merge_candidates` row to status='merged'. - Writes audit log entry. Schema additions: - `clients.merged_into_client_id` (nullable text, indexed) — the redirect pointer set on archive. Tests: 6 cases against a real DB — happy path moves rows + writes log; self-merge / cross-port / already-merged refused; duplicate-contact deduped on reattach; fieldChoices copies loser values to winner. Layer 3 — Admin review queue ============================ `GET /api/v1/admin/duplicates` Pending merge candidates (status='pending') for the current port, with both client summaries hydrated for side-by-side rendering. Skips pairs where one side is already archived/merged. `POST /api/v1/admin/duplicates/[id]/merge` Confirms a candidate. Body picks the winner; the other side becomes the loser. Calls into `mergeClients` — the only path that writes `client_merge_log`. `POST /api/v1/admin/duplicates/[id]/dismiss` Marks the candidate dismissed. Future scoring runs skip the same pair until a score change recreates the row. `<DuplicatesReviewQueue>` (`/admin/duplicates`) Side-by-side card UI for each pending pair. Click a card to pick the winner; the other side is automatically the loser. Toolbar: "Merge into selected" + "Dismiss". No per-field merge editor in this PR — that's a future polish; the simple "pick the better row" flow handles ~80% of cases. Test coverage ============= 11 new integration tests (76 added in this branch total): - 6 mergeClients (atomicity, refusal cases, contact dedup, fieldChoices) - 5 match-candidates API (shape, port scoping, confidence tiers, Pattern F false-positive guard) Full vitest: 926/926 passing (was 858 before the dedup branch). Lint: clean. tsc: clean for new files (only pre-existing errors in unrelated `tests/integration/` files remain, same as before this PR). Out of scope, deferred ====================== - Background scoring cron that populates `client_merge_candidates` (the queue is empty until this lands; manual seeding works for now via the at-create flow). - Side-by-side per-field merge editor with checkboxes (the simple "pick the winner" UX shipped here covers ~80% of real cases). - Admin settings UI for tuning the dedup thresholds. Defaults from the design (90 / 50) are baked in for now. - `unmergeClients` (the snapshot is captured in client_merge_log; the undo endpoint just hasn't been wired yet). These are all natural follow-up PRs that don't block shipping the runtime UX. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 14:59:04 +02:00
},
{
"idx": 22,
feat(dedup): runtime surfaces — merge service, at-create suggestion, admin queue (P2) Adds the live dedup pipeline on top of the P1 library + P3 migration script. The new `client/interest` model now actively prevents duplicate client records at creation time and gives admins a queue to triage the borderline pairs the at-create check missed. Three layers, per design §7: Layer 1 — At-create suggestion ============================== `GET /api/v1/clients/match-candidates` Accepts free-text email / phone / name from the in-flight client form, normalizes them via the dedup library, and returns scored matches against the port's live client pool. Filters out low-confidence noise (the background scoring queue picks those up separately). Strict port scoping; never leaks across tenants. `<DedupSuggestionPanel>` (`src/components/clients/dedup-suggestion-panel.tsx`) Debounced React Query hook. Renders nothing for short inputs or no useful match. On a high-confidence match it interrupts visually with an amber-tinted card and a "Use this client" primary button. Medium confidence falls back to a softer "possible match — check before creating" treatment. `<ClientForm>` Renders the panel above the form (create path only — skipped on edit). New `onUseExistingClient` callback fires when the user picks the existing client; the form closes and the parent decides what to do (typically: navigate to that client's detail page or open the create-interest dialog pre-filled). Layer 2 — Merge service ======================= `mergeClients` (`src/lib/services/client-merge.service.ts`) The atomic merge primitive that everything else calls. Single transaction. Per §6 of the design: - Locks both rows (FOR UPDATE) so concurrent merges of the same loser fail with a clear error rather than racing. - Snapshots the full loser state (contacts / addresses / notes / tags / interest+reservation IDs / relationship rows) into the `client_merge_log.merge_details` JSONB column for the eventual undo flow. - Reattaches every loser-side row to the winner: interests, reservations, contacts (skipping duplicates by `(channel, value)`), addresses, notes, tags (deduped), relationships. - Optional `fieldChoices` — per-scalar overrides letting the user keep the loser's value for fullName / nationality / preferences / timezone / source. - Marks the loser archived with `mergedIntoClientId` set (a redirect pointer for stragglers; never hard-deleted within the undo window). - Resolves any matching `client_merge_candidates` row to status='merged'. - Writes audit log entry. Schema additions: - `clients.merged_into_client_id` (nullable text, indexed) — the redirect pointer set on archive. Tests: 6 cases against a real DB — happy path moves rows + writes log; self-merge / cross-port / already-merged refused; duplicate-contact deduped on reattach; fieldChoices copies loser values to winner. Layer 3 — Admin review queue ============================ `GET /api/v1/admin/duplicates` Pending merge candidates (status='pending') for the current port, with both client summaries hydrated for side-by-side rendering. Skips pairs where one side is already archived/merged. `POST /api/v1/admin/duplicates/[id]/merge` Confirms a candidate. Body picks the winner; the other side becomes the loser. Calls into `mergeClients` — the only path that writes `client_merge_log`. `POST /api/v1/admin/duplicates/[id]/dismiss` Marks the candidate dismissed. Future scoring runs skip the same pair until a score change recreates the row. `<DuplicatesReviewQueue>` (`/admin/duplicates`) Side-by-side card UI for each pending pair. Click a card to pick the winner; the other side is automatically the loser. Toolbar: "Merge into selected" + "Dismiss". No per-field merge editor in this PR — that's a future polish; the simple "pick the better row" flow handles ~80% of cases. Test coverage ============= 11 new integration tests (76 added in this branch total): - 6 mergeClients (atomicity, refusal cases, contact dedup, fieldChoices) - 5 match-candidates API (shape, port scoping, confidence tiers, Pattern F false-positive guard) Full vitest: 926/926 passing (was 858 before the dedup branch). Lint: clean. tsc: clean for new files (only pre-existing errors in unrelated `tests/integration/` files remain, same as before this PR). Out of scope, deferred ====================== - Background scoring cron that populates `client_merge_candidates` (the queue is empty until this lands; manual seeding works for now via the at-create flow). - Side-by-side per-field merge editor with checkboxes (the simple "pick the winner" UX shipped here covers ~80% of real cases). - Admin settings UI for tuning the dedup thresholds. Defaults from the design (90 / 50) are baked in for now. - `unmergeClients` (the snapshot is captured in client_merge_log; the undo endpoint just hasn't been wired yet). These are all natural follow-up PRs that don't block shipping the runtime UX. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 14:59:04 +02:00
"version": "7",
"when": 1777814682110,
"tag": "0022_medical_betty_brant",
"breakpoints": true
},
{
"idx": 23,
"version": "7",
"when": 1777927586934,
"tag": "0023_omniscient_reaper",
feat(dedup): runtime surfaces — merge service, at-create suggestion, admin queue (P2) Adds the live dedup pipeline on top of the P1 library + P3 migration script. The new `client/interest` model now actively prevents duplicate client records at creation time and gives admins a queue to triage the borderline pairs the at-create check missed. Three layers, per design §7: Layer 1 — At-create suggestion ============================== `GET /api/v1/clients/match-candidates` Accepts free-text email / phone / name from the in-flight client form, normalizes them via the dedup library, and returns scored matches against the port's live client pool. Filters out low-confidence noise (the background scoring queue picks those up separately). Strict port scoping; never leaks across tenants. `<DedupSuggestionPanel>` (`src/components/clients/dedup-suggestion-panel.tsx`) Debounced React Query hook. Renders nothing for short inputs or no useful match. On a high-confidence match it interrupts visually with an amber-tinted card and a "Use this client" primary button. Medium confidence falls back to a softer "possible match — check before creating" treatment. `<ClientForm>` Renders the panel above the form (create path only — skipped on edit). New `onUseExistingClient` callback fires when the user picks the existing client; the form closes and the parent decides what to do (typically: navigate to that client's detail page or open the create-interest dialog pre-filled). Layer 2 — Merge service ======================= `mergeClients` (`src/lib/services/client-merge.service.ts`) The atomic merge primitive that everything else calls. Single transaction. Per §6 of the design: - Locks both rows (FOR UPDATE) so concurrent merges of the same loser fail with a clear error rather than racing. - Snapshots the full loser state (contacts / addresses / notes / tags / interest+reservation IDs / relationship rows) into the `client_merge_log.merge_details` JSONB column for the eventual undo flow. - Reattaches every loser-side row to the winner: interests, reservations, contacts (skipping duplicates by `(channel, value)`), addresses, notes, tags (deduped), relationships. - Optional `fieldChoices` — per-scalar overrides letting the user keep the loser's value for fullName / nationality / preferences / timezone / source. - Marks the loser archived with `mergedIntoClientId` set (a redirect pointer for stragglers; never hard-deleted within the undo window). - Resolves any matching `client_merge_candidates` row to status='merged'. - Writes audit log entry. Schema additions: - `clients.merged_into_client_id` (nullable text, indexed) — the redirect pointer set on archive. Tests: 6 cases against a real DB — happy path moves rows + writes log; self-merge / cross-port / already-merged refused; duplicate-contact deduped on reattach; fieldChoices copies loser values to winner. Layer 3 — Admin review queue ============================ `GET /api/v1/admin/duplicates` Pending merge candidates (status='pending') for the current port, with both client summaries hydrated for side-by-side rendering. Skips pairs where one side is already archived/merged. `POST /api/v1/admin/duplicates/[id]/merge` Confirms a candidate. Body picks the winner; the other side becomes the loser. Calls into `mergeClients` — the only path that writes `client_merge_log`. `POST /api/v1/admin/duplicates/[id]/dismiss` Marks the candidate dismissed. Future scoring runs skip the same pair until a score change recreates the row. `<DuplicatesReviewQueue>` (`/admin/duplicates`) Side-by-side card UI for each pending pair. Click a card to pick the winner; the other side is automatically the loser. Toolbar: "Merge into selected" + "Dismiss". No per-field merge editor in this PR — that's a future polish; the simple "pick the better row" flow handles ~80% of cases. Test coverage ============= 11 new integration tests (76 added in this branch total): - 6 mergeClients (atomicity, refusal cases, contact dedup, fieldChoices) - 5 match-candidates API (shape, port scoping, confidence tiers, Pattern F false-positive guard) Full vitest: 926/926 passing (was 858 before the dedup branch). Lint: clean. tsc: clean for new files (only pre-existing errors in unrelated `tests/integration/` files remain, same as before this PR). Out of scope, deferred ====================== - Background scoring cron that populates `client_merge_candidates` (the queue is empty until this lands; manual seeding works for now via the at-create flow). - Side-by-side per-field merge editor with checkboxes (the simple "pick the winner" UX shipped here covers ~80% of real cases). - Admin settings UI for tuning the dedup thresholds. Defaults from the design (90 / 50) are baked in for now. - `unmergeClients` (the snapshot is captured in client_merge_log; the undo endpoint just hasn't been wired yet). These are all natural follow-up PRs that don't block shipping the runtime UX. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 14:59:04 +02:00
"breakpoints": true
},
{
"idx": 24,
"version": "7",
"when": 1777938954111,
"tag": "0024_normalize_mooring_numbers",
"breakpoints": true
},
{
"idx": 25,
"version": "7",
"when": 1777939212954,
"tag": "0025_berth_pricing_columns",
"breakpoints": true
},
{
"idx": 26,
"version": "7",
"when": 1777939906731,
"tag": "0026_client_contacts_one_primary_per_channel",
"breakpoints": true
},
{
"idx": 27,
"version": "7",
"when": 1777939914252,
"tag": "0027_backfill_nationality_iso_from_phone",
"breakpoints": true
feat(db): m:m interest_berths junction + role flags Introduces the multi-berth interest model from plan §3.1: a junction between interests and berths with three role flags so the same berth can be linked as the primary deal target, an EOI-bundle inclusion, or a "just exploring" link without conflating semantics. - 0028 schema migration creates interest_berths with the unique partial index "≤1 primary per interest", a unique compound on (interest_id, berth_id), and indexes for the public-map "under offer" lookup (where is_specific_interest=true). - Same migration adds desired_length_ft / desired_width_ft / desired_draft_ft to interests for the recommender. - Same migration runs the Phase 2 data migration: every interest with a non-null berth_id gets one junction row marked is_primary=true, is_specific_interest=true, and is_in_eoi_bundle = (eoi_status='signed'). Pre-flight check halts on dangling FKs (§14.3 critical case). - New service src/lib/services/interest-berths.service.ts owns reads + writes of the junction. getPrimaryBerth / getPrimaryBerthsForInterests feed list pages; upsertInterestBerth demotes the prior primary in the same transaction so the unique index is never violated. - interests.berth_id stays in place this commit so existing callers keep working; Phase 2b migrates them onto the helper service and a later migration drops the column. 53 dev rows seeded into the junction; tests still green at 996. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 02:22:11 +02:00
},
{
"idx": 28,
"version": "7",
"when": 1777940421236,
"tag": "0028_interest_berths_junction",
"breakpoints": true
refactor(interests): migrate callers to interest_berths junction + drop berth_id Phase 2b of the berth-recommender refactor (plan §3.4). Every caller of the legacy `interests.berth_id` column now reads / writes through the `interest_berths` junction via the helper service introduced in Phase 2a; the column itself is dropped in a final migration. Service-layer changes - interests.service: filter `?berthId=X` becomes EXISTS-against-junction; list enrichment uses `getPrimaryBerthsForInterests`; create/update/ linkBerth/unlinkBerth all dispatch through the junction helpers, with createInterest's row insert + junction write sharing a single transaction. - clients / dashboard / report-generators / search: leftJoin chains pivot through `interest_berths` filtered by `is_primary=true`. - eoi-context / document-templates / berth-rules-engine / portal / record-export / queue worker: read primary via `getPrimaryBerth(...)`. - interest-scoring: berthLinked is now derived from any junction row count. - dedup/migration-apply + public interest route: write a primary junction row alongside the interest insert when a berth is provided. API contract preserved: list/detail responses still emit `berthId` and `berthMooringNumber`, derived from the primary junction row, so frontend consumers (interest-form, interest-detail-header) need no changes. Schema + migration - Drop `interestsRelations.berth` and `idx_interests_berth`. - Replace `berthsRelations.interests` with `interestBerths`. - Migration 0029_puzzling_romulus drops `interests.berth_id` + the index. - Tests that previously inserted `interests.berthId` now seed a primary junction row alongside the interest. Verified: vitest 995 passing (1 unrelated pre-existing flake in maintenance-cleanup.test.ts), tsc clean. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 02:41:52 +02:00
},
{
"idx": 29,
"version": "7",
"when": 1777941465866,
"tag": "0029_puzzling_romulus",
"breakpoints": true
feat(berths): per-berth PDF storage (versioned) + reverse parser Phase 6b of the berth-recommender refactor (see docs/berth-recommender-and-pdf-plan.md §3.2, §3.3, §4.7b, §11.1, §14.6). Builds on the Phase 6a pluggable storage backend (commit 83693dd) — every file write goes through `getStorageBackend()`; no direct minio imports. Schema (migration 0030_berth_pdf_versions): - new table `berth_pdf_versions` with monotonic `version_number` per berth, `storage_key` (renamed convention from §4.7a), sha256, size, `download_url_expires_at` cache slot for §11.1 signed-URL throttling, and `parse_results` jsonb for the audit trail. - new column `berths.current_pdf_version_id` (deferred from Phase 0) with FK to `berth_pdf_versions(id)` ON DELETE SET NULL. - relations + types exported from `schema/berths.ts`. 3-tier reverse parser (`lib/services/berth-pdf-parser.ts`): 1. AcroForm via pdf-lib — pulls named fields (`length_ft`, `mooring_number`, etc.) at confidence 1. Sample PDF has 0 such fields, so this is defensive coverage for future templates. 2. OCR via Tesseract.js — positional/regex heuristics keyed off the §9.2 layout (Length/Width/Water Depth as `<imperial> / <metric>`, `WEEK HIGH / LOW`, `CONFIRMED THROUGH UNTIL <date>`, etc.). Returns per-field confidence + global mean; flags imperial-vs-metric drift >1% in `warnings`. 3. AI fallback — gated via `getResolvedOcrConfig()` (existing openai/claude provider). Surfaced from the diff dialog only when `shouldOfferAiTier()` returns true (mean OCR confidence below 0.55 threshold), so OPENAI_API_KEY isn't burned on every upload. Service layer (`lib/services/berth-pdf.service.ts`): - `uploadBerthPdf()` — magic-byte check, size cap, version-number bump + current pointer in one transaction. - `reconcilePdfWithBerth()` — auto-applies fields where CRM is null; flags conflicts when CRM and PDF disagree; tolerates ±1% on numeric columns; warns on mooring-number-in-PDF mismatch (§14.6). - `applyParseResults()` — hard allowlist of writable columns; stamps `appliedFields` onto `parse_results` for audit. - `rollbackToVersion()` — pointer flip only, never re-parses (§14.6). - `listBerthPdfVersions()` — version list with 15-min signed URLs. - `getMaxUploadMb()` — port-override → global → default 15 lookup on `system_settings.berth_pdf_max_upload_mb`. §14.6 critical mitigations: - Magic-byte check (`%PDF-`) on every upload; mismatch deletes the storage object and rejects the request. - Size cap from `system_settings.berth_pdf_max_upload_mb` (default 15 MB); enforced in the upload-url presign AND server-side. - 0-byte uploads rejected. - Mooring-number mismatch surfaces as a `warnings[]` entry on the reconcile result so the rep sees it in the diff dialog. - Imperial vs metric ±1% tolerance in both the parser warnings and the reconcile equality check. - Path traversal already blocked at the storage layer (Phase 6a). API + UI: - `POST /api/v1/berths/[id]/pdf-upload-url` — presigned URL (S3) or HMAC-signed proxy URL (filesystem) sized to the per-port cap. - `POST /api/v1/berths/[id]/pdf-versions` — verifies the upload via `backend.head()`, writes the row, bumps `current_pdf_version_id`. - `GET /api/v1/berths/[id]/pdf-versions` — version list + signed URLs. - `POST /api/v1/berths/[id]/pdf-versions/[versionId]/rollback`. - `POST /api/v1/berths/[id]/pdf-versions/parse-results/apply` — rep-confirmed diff payload. - New "Documents" tab on the berth detail page (`berth-tabs.tsx`) with current-PDF panel, version history, Replace PDF button, and `<PdfReconcileDialog>` for the auto-applied + conflicts UX. System settings: - `berth_pdf_max_upload_mb` (default 15) — caps presigned-upload size + server-side validation. Resolved port-override → global → default. Tests: - `tests/unit/services/berth-pdf-parser.test.ts` — magic bytes, feet-inches, human dates, full §9.2-shaped OCR text → 18 fields, drift warning, AI-tier gate. - `tests/unit/services/berth-pdf-acroform.test.ts` — synthetic pdf-lib AcroForm round-trip. - `tests/integration/berth-pdf-versions.test.ts` — upload, version- number bump, magic-byte rejection, reconcile auto-applied vs conflicts vs ±1% tolerance, mooring-number warning, applyParseResults allowlist enforcement, rollback semantics. Acceptance: `pnpm exec tsc --noEmit` clean, `pnpm exec vitest run` green at 1103/1103. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 03:34:24 +02:00
},
{
"idx": 30,
"version": "7",
"when": 1777944021221,
"tag": "0030_berth_pdf_versions",
"breakpoints": true
feat(emails): sales send-out flows + brochures + email-from settings Phase 7 of the berth-recommender refactor (plan §3.3, §4.8, §4.9, §5.7, §5.8, §5.9, §11.1, §14.7, §14.9). Adds the rep-driven send-out path for per-berth PDFs and port-wide brochures, the per-port sales SMTP/IMAP config + body templates, and the supporting admin UI. Migration: 0031_brochures_and_document_sends.sql Schema additions: - brochures (port-wide, with isDefault marker + archive) - brochure_versions (versioned uploads, storageKey per §4.7a) - document_sends (audit log of every rep-initiated send; failures captured with failedAt + errorReason). berthPdfVersionId is a plain text column (no FK) — loose-coupled to Phase 6b's berth_pdf_versions so the two phases stay independent. §14.7 critical mitigations: - Body XSS: rep-authored markdown goes through renderEmailBody() (HTML-escape first, then a tight allowlist of bold/italic/code/link rules). https:// + mailto: only — javascript:/data: URLs stripped. Tested against script/img/iframe/svg/onerror polyglots. - Recipient typo: strict email regex + two-step confirm modal that shows the exact recipient before send. - Unresolved merge fields: pre-send dry-run /preview endpoint blocks submission until findUnresolvedTokens() returns empty. - SMTP failure: every transport rejection writes a document_sends row with failedAt + errorReason; UI surfaces the message. - Hourly per-user rate limit: 50 sends/user/hour via existing checkRateLimit(). - Size threshold fallback (§11.1): files above email_attach_threshold_mb (default 15) ship as a 24h signed-URL download link in the body instead of an attachment. Storage stream flows directly to nodemailer to avoid buffering 20MB+. §14.10 critical mitigation: - SMTP/IMAP passwords encrypted at rest via the existing EMAIL_CREDENTIAL_KEY (AES-256-GCM). The /api/v1/admin/email/ sales-config GET endpoint never returns the decrypted value — only a *PassIsSet boolean. PATCH treats empty string as "leave unchanged" and explicit null as "clear", so the masked-placeholder UI round- trips without forcing re-entry on every save. system_settings keys (per-port unless noted): - sales_from_address, sales_smtp_{host,port,secure,user,pass_encrypted} - sales_imap_{host,port,user,pass_encrypted} - sales_auth_method (default app_password) - noreply_from_address - email_template_send_berth_pdf_body, email_template_send_brochure_body - brochure_max_upload_mb (default 50) - email_attach_threshold_mb (default 15) UI surfaces (per §5.7, §5.8, §5.9): - <SendDocumentDialog> shared 2-step compose+confirm flow. - <SendBerthPdfDialog>, <SendDocumentsDialog>, <SendFromInterestButton> wrappers per detail page. - /[portSlug]/admin/brochures: list, upload (direct-to-storage presigned PUT for the 20MB+ files per §11.1), default toggle, archive. - /[portSlug]/admin/email extended with <SalesEmailConfigCard>: SMTP + IMAP creds, body templates, threshold/max settings. Storage: every upload + download goes through getStorageBackend() — no direct minio imports, per Phase 6a contract. Tests: 1145 vitest passing (+ 50 new in markdown-email-sanitization.test.ts, document-sends-validators.test.ts, sales-email-config-validators.test.ts). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 03:38:47 +02:00
},
{
"idx": 31,
"version": "7",
"when": 1777944191753,
"tag": "0031_brochures_and_document_sends",
"breakpoints": true
}
]
}