pn-new-crm/src/lib/dedup/migration-apply.ts
578 lines · 19 KiB · TypeScript

feat(dedup): wire --apply path for NocoDB migration

Completes the migration script's apply phase, which was stubbed at the P3 ship to defer until after the runtime surfaces (P2) and the comms safety net were in place. Both prerequisites just landed on main, so this unblocks the actual data import.

src/lib/dedup/migration-apply.ts (new): idempotent apply driver. Walks the MigrationPlan, inserting clients, contacts, addresses, yacht stubs, and interests, threading every insert through the migration_source_links ledger so re-runs against the same data are safe. Per-entity transactions (not one giant transaction), so partial-failure resumption is just "run again."

Per-entity behavior:
- clients: idempotent on (source_system, source_id, target_type=client) across the entire dedup cluster; if any source row already maps to a client, reuse that record.
- contacts: bulk insert; primary email and primary phone are independent.
- addresses: bulk insert, port_id required (schema enforces it), first address marked primary when there are multiple.
- yachts: minimal stub when the legacy interest had a yachtName, with currentOwnerType=client and currentOwnerId=migrated client. Linked via migration_source_links target_type=yacht.
- interests: looks up berthId via mooring number and yachtId via the stub above. Carries the Documenso ID forward when present.

surnameToken from PlannedClient is dropped on insert (it's a dedup blocking-index artifact; runtime dedup re-derives it from fullName).

scripts/migrate-from-nocodb.ts:
- Removes the "not yet implemented" guard for --apply.
- Adds an EMAIL_REDIRECT_TO precondition gate: --apply errors out unless the env var is set, or --unsafe-skip-redirect-check is also passed (production cutover only). Refers to docs/operations/outbound-comms-safety.md.
- Re-fetches NocoDB at apply time (rather than reading a saved report dir) so the data is always fresh. Re-running is safe via the idempotency ledger.
- Resolves the target port via --port-slug (or the first port if omitted).
- Generates a UUID applyId tagged on every link, which pairs with a future --rollback flag.
- The apply summary prints inserted/skipped counts per entity type plus the first 20 warnings.

Verification: 0 tsc errors, 926/926 vitest passing, lint clean. The actual end-to-end run requires NOCODB_URL + NOCODB_TOKEN in .env, which aren't configured in this checkout; that's the operator's next step.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 19:53:04 +02:00
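The EMAIL_REDIRECT_TO precondition gate described in the commit message can be sketched roughly as follows. This is a minimal illustration only: `checkApplyPreconditions` and its parameter shapes are hypothetical, not the actual code in scripts/migrate-from-nocodb.ts.

```typescript
// Hypothetical sketch of the --apply precondition gate. The real guard
// lives in scripts/migrate-from-nocodb.ts and may be shaped differently.
function checkApplyPreconditions(
  env: { EMAIL_REDIRECT_TO?: string },
  flags: { apply: boolean; unsafeSkipRedirectCheck: boolean },
): { ok: boolean; reason?: string } {
  if (!flags.apply) return { ok: true }; // dry-run paths are always allowed
  if (env.EMAIL_REDIRECT_TO) return { ok: true }; // comms safety net is armed
  if (flags.unsafeSkipRedirectCheck) return { ok: true }; // production cutover only
  return {
    ok: false,
    reason:
      'Refusing --apply: set EMAIL_REDIRECT_TO (see docs/operations/outbound-comms-safety.md) ' +
      'or pass --unsafe-skip-redirect-check for the production cutover.',
  };
}
```

The point of the shape: the gate only ever blocks the write path, and the escape hatch is a second, loudly-named flag rather than an env var, so it can't be left set accidentally in a shell profile.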
/**
 * Apply phase for the legacy NocoDB CRM migration. Walks a
 * `MigrationPlan` produced by {@link transformSnapshot} and writes
 * the new client / contact / address / yacht / interest rows into the
 * target port.
 *
 * Idempotent: every insert is guarded by a `migration_source_links`
 * lookup keyed on `(source_system, source_id, target_entity_type)`, so
 * a partial failure can be resumed by re-running the script. Re-runs
 * against an already-applied plan are a near-no-op.
 *
 * Per-entity transactions (not one giant transaction) - the design
 * favours visible partial progress on failure over all-or-nothing.
 *
 * @see src/lib/dedup/migration-transform.ts for the input shape.
 * @see src/lib/db/schema/migration.ts for the idempotency ledger.
 */
import { and, eq, inArray } from 'drizzle-orm';
import { db } from '@/lib/db';
import { clients, clientContacts, clientAddresses } from '@/lib/db/schema/clients';
import { interests } from '@/lib/db/schema/interests';
import { yachts } from '@/lib/db/schema/yachts';
import { berths } from '@/lib/db/schema/berths';
import { documents, documentSigners } from '@/lib/db/schema/documents';
import { residentialClients, residentialInterests } from '@/lib/db/schema/residential';
import { migrationSourceLinks } from '@/lib/db/schema/migration';
import type {
  MigrationPlan,
  PlannedClient,
  PlannedDocument,
  PlannedInterest,
  PlannedResidentialClient,
} from './migration-transform';
const SOURCE_SYSTEM = 'nocodb_interests';
fix(migration): legacy bare-mooring lookup + port-nimara berth backfill

Two issues surfaced when applying the migration to dev:

1. Mooring number format mismatch

   The legacy NocoDB Interests table writes bare mooring strings ("D32", "B16", "A4"), but the new berths table (mirroring the NocoDB Berths snapshot) uses the zero-padded dashed form ("D-32", "B-16", "A-04"). The interest→berth lookup missed every reference.

   migration-apply.ts now tries the literal value first, then falls back to a normalized form via `normalizeLegacyMooring(raw)`:

   "D32" -> "D-32"
   "A4"  -> "A-04"
   "E18" -> "E-18"

   Multi-mooring strings ("A3, D30") are left as-is so they surface in the warnings list for human review rather than silently picking one.

2. port-nimara only had the 12 hand-rolled seed berths, not the 117-berth NocoDB snapshot

   The mobile-foundation seed only places those 12 in port-nimara; the 117-berth snapshot was added later but only seeded into Marina Azzurra (the secondary test port). Migrated interests reference moorings well beyond A-01..D-03, so most lookups failed.

   New scripts/load-berths-to-port-nimara.ts: idempotently loads any missing snapshot berths into port-nimara without disturbing the existing 12 (skips moorings that already exist). Run once; subsequent runs no-op.

Result of full migration run on dev:
- 237 clients inserted (out of 245 total; 8 from prior seed)
- 406 contacts, 52 addresses, 38 yachts, 252 interests
- 27 interest→berth links resolved (only 13 source rows had a Berth field set in NocoDB to begin with; most legacy interests are early inquiries with no berth assignment)
- 1 unresolved warning: source=277 has multi-mooring "A3, D30"

Verified in UI:
- /port-nimara/clients shows real names (John-michael Seelye, Reza Amjad, Etiennette Clamouze, …)
- /port-nimara/clients/<id> renders contacts (gmail.com addresses, E.164 phones), tab counts (Interests N, Yachts N), pipeline summary
- Dashboard: 245 clients, 266 active interests, $46.5M pipeline value
- Pipeline funnel chart now shows real distribution (180 Open, 45 EOI Signed, dropoff through stages)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 21:05:11 +02:00
/**
 * Convert a legacy bare mooring string like "D32" / "A1" / "E18" to the
 * dashed/padded form "D-32" / "A-01" / "E-18" used by the new berths
 * schema. If the input doesn't match the bare pattern, returns it
 * unchanged so a literal lookup can still hit (handles the case where
 * the legacy data already has the dashed form).
 *
 * Multi-mooring strings ("A3, D30") return the original string -
 * those need human review and we don't want to silently pick one half.
 */
function normalizeLegacyMooring(raw: string): string {
  // Bare letter+digits, e.g. "D32"
  const m = /^([A-E])(\d{1,3})$/i.exec(raw.trim());
  if (!m) return raw;
  const letter = m[1]!.toUpperCase();
  const num = parseInt(m[2]!, 10);
  return `${letter}-${num.toString().padStart(2, '0')}`;
}
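A few concrete inputs and outputs for the normalization above. The function body is repeated here so the snippet is self-contained; in the file itself the apply path tries the literal mooring string first and only falls back to this normalized form.

```typescript
// Self-contained copy of normalizeLegacyMooring, for illustration only.
function normalizeLegacyMooring(raw: string): string {
  const m = /^([A-E])(\d{1,3})$/i.exec(raw.trim());
  if (!m) return raw;
  const letter = m[1]!.toUpperCase();
  const num = parseInt(m[2]!, 10);
  return `${letter}-${num.toString().padStart(2, '0')}`;
}

// Bare forms are dashed and zero-padded to two digits...
console.log(normalizeLegacyMooring('D32')); // "D-32"
console.log(normalizeLegacyMooring('A4')); // "A-04"
// ...while already-dashed and multi-mooring strings pass through
// unchanged, so the literal lookup / warnings path still sees them.
console.log(normalizeLegacyMooring('D-32')); // "D-32"
console.log(normalizeLegacyMooring('A3, D30')); // "A3, D30"
```

Returning the input unchanged on no-match is what lets the caller use a single "literal first, then normalized" lookup without special-casing the multi-mooring rows.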
export interface ApplyResult {
  applyId: string;
  clientsInserted: number;
  clientsSkipped: number;
  contactsInserted: number;
  addressesInserted: number;
  yachtsInserted: number;
  interestsInserted: number;
  interestsSkipped: number;
  documentsInserted: number;
  documentsSkipped: number;
  documentSignersInserted: number;
  residentialClientsInserted: number;
  residentialClientsSkipped: number;
  residentialInterestsInserted: number;
  warnings: string[];
}
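The "inserted/skipped counts per entity type plus the first 20 warnings" summary from the commit message could be rendered from an ApplyResult along these lines. This is a hedged sketch with a trimmed field set: `formatSummary` and `SummaryCounts` are hypothetical, and the script's real output format may differ.

```typescript
// Hypothetical summary formatter; field names mirror a subset of ApplyResult.
interface SummaryCounts {
  applyId: string;
  clientsInserted: number;
  clientsSkipped: number;
  interestsInserted: number;
  interestsSkipped: number;
  warnings: string[];
}

const MAX_WARNINGS_SHOWN = 20;

function formatSummary(r: SummaryCounts): string {
  const lines = [
    `apply ${r.applyId}`,
    `clients:   ${r.clientsInserted} inserted, ${r.clientsSkipped} skipped`,
    `interests: ${r.interestsInserted} inserted, ${r.interestsSkipped} skipped`,
  ];
  // Cap the warning spill so a noisy run stays readable in a terminal.
  for (const w of r.warnings.slice(0, MAX_WARNINGS_SHOWN)) {
    lines.push(`warning: ${w}`);
  }
  if (r.warnings.length > MAX_WARNINGS_SHOWN) {
    lines.push(`(${r.warnings.length - MAX_WARNINGS_SHOWN} more warnings suppressed)`);
  }
  return lines.join('\n');
}
```

Because skips are counted separately from inserts, a re-run against an already-applied plan shows up as all-skips rather than silently reporting zero work.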
export interface ApplyOptions {
  port: { id: string; slug: string };
  applyId: string;
  /** Set to true for the "preview the writes" mode - runs every read but
   * rolls back inserts. Useful for verifying mappings before committing. */
  rehearsal?: boolean;
  appliedBy?: string;
}
/**
 * Look up an existing migration link for a (sourceId, targetType) pair.
 * Returns the existing target entity id if already linked.
 */
async function resolveExistingLink(
  sourceId: number,
  targetEntityType:
    | 'client'
    | 'interest'
    | 'yacht'
    | 'address'
    | 'document'
    | 'residential_client'
    | 'residential_interest',
): Promise<string | null> {
  const rows = await db
    .select({ id: migrationSourceLinks.targetEntityId })
    .from(migrationSourceLinks)
    .where(
      and(
        eq(migrationSourceLinks.sourceSystem, SOURCE_SYSTEM),
        eq(migrationSourceLinks.sourceId, String(sourceId)),
        eq(migrationSourceLinks.targetEntityType, targetEntityType),
      ),
    )
    .limit(1);
  return rows[0]?.id ?? null;
}
/** Find the first sourceId in a cluster that's already linked to a client,
 * if any. The cluster might be larger than the previously-applied set if
 * the dedup algorithm collapsed an extra duplicate this run. */
async function resolveExistingClusterClient(sourceIds: number[]): Promise<string | null> {
  if (sourceIds.length === 0) return null;
  const rows = await db
    .select({ id: migrationSourceLinks.targetEntityId })
    .from(migrationSourceLinks)
    .where(
      and(
        eq(migrationSourceLinks.sourceSystem, SOURCE_SYSTEM),
        inArray(migrationSourceLinks.sourceId, sourceIds.map(String)),
        eq(migrationSourceLinks.targetEntityType, 'client'),
      ),
    )
    .limit(1);
  return rows[0]?.id ?? null;
}
/** Apply a single PlannedClient - returns `{clientId, inserted}` so the
 * caller can wire interests against the (possibly pre-existing) record. */
async function applyClient(
  planned: PlannedClient,
  opts: ApplyOptions,
  result: ApplyResult,
): Promise<{ clientId: string; inserted: boolean }> {
  // Idempotency: if any source row in the cluster already mapped to a client,
  // reuse that record.
  const existing = await resolveExistingClusterClient(planned.sourceIds);
  if (existing) {
    result.clientsSkipped += 1;
    return { clientId: existing, inserted: false };
  }
  if (opts.rehearsal) {
    // Simulate an insert without writing - used for the preview path.
return { clientId: `rehearsal-${planned.tempId}`, inserted: true };
}
// surnameToken is on the planned object (used by the dedup blocking
// index inside the transform) but not in the clients schema - runtime
// dedup re-derives it from fullName when needed. Drop it on insert.
const [inserted] = await db
.insert(clients)
.values({
portId: opts.port.id,
fullName: planned.fullName,
nationalityIso: planned.countryIso ?? null,
preferredContactMethod: planned.preferredContactMethod ?? null,
source: planned.source ?? null,
})
.returning({ id: clients.id });
if (!inserted) throw new Error('Client insert returned no row');
const clientId = inserted.id;
// Record idempotency links - one per source row in the cluster.
await db.insert(migrationSourceLinks).values(
planned.sourceIds.map((sid) => ({
sourceSystem: SOURCE_SYSTEM,
sourceId: String(sid),
targetEntityType: 'client' as const,
targetEntityId: clientId,
appliedId: opts.applyId,
...(opts.appliedBy ? { appliedBy: opts.appliedBy } : {}),
})),
);
// Contacts: bulk insert; mark first email + first phone as primary.
if (planned.contacts.length > 0) {
let primaryEmailSet = false;
let primaryPhoneSet = false;
const contactRows = planned.contacts.map((ct) => {
let isPrimary = false;
if (ct.isPrimary) {
if (ct.channel === 'email' && !primaryEmailSet) {
isPrimary = true;
primaryEmailSet = true;
} else if ((ct.channel === 'phone' || ct.channel === 'whatsapp') && !primaryPhoneSet) {
isPrimary = true;
primaryPhoneSet = true;
}
}
return {
clientId,
channel: ct.channel,
value: ct.value,
valueE164: ct.valueE164 ?? null,
valueCountry: ct.valueCountry ?? null,
isPrimary,
};
});
await db.insert(clientContacts).values(contactRows);
result.contactsInserted += contactRows.length;
}
// Addresses: bulk insert; first is marked primary if multiple. Note the
// schema requires portId on every address row in addition to clientId.
if (planned.addresses.length > 0) {
const addressRows = planned.addresses.map((a, idx) => ({
clientId,
portId: opts.port.id,
streetAddress: a.streetAddress ?? null,
city: a.city ?? null,
countryIso: a.countryIso ?? null,
isPrimary: idx === 0,
}));
await db.insert(clientAddresses).values(addressRows);
result.addressesInserted += addressRows.length;
}
result.clientsInserted += 1;
return { clientId, inserted: true };
}
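// --- Illustrative sketch (assumption: not part of the driver) --------------
// The idempotency contract applyClient/applyInterest lean on: the real
// resolveExistingLink (defined earlier in this file) queries the
// migration_source_links ledger keyed on (source_system, source_id,
// target_entity_type); a hit means the legacy row was already applied, so
// the existing target id is reused instead of inserting a duplicate. A
// minimal in-memory model of that contract, for orientation only:
type SketchLedgerKey = `${string}:${string}:${string}`;
const sketchLedger = new Map<SketchLedgerKey, string>();

function sketchRecordLink(sourceId: string, targetType: string, targetId: string): void {
  sketchLedger.set(`nocodb:${sourceId}:${targetType}`, targetId);
}

function sketchResolveExistingLink(sourceId: string, targetType: string): string | null {
  // A re-run hits this lookup and skips the insert, which is why
  // partial-failure resumption is just "run again".
  return sketchLedger.get(`nocodb:${sourceId}:${targetType}`) ?? null;
}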
/** Apply a single PlannedInterest - looks up its client + berth + yacht and
* inserts the interest row, plus a yacht stub if a yacht name is present. */
async function applyInterest(
planned: PlannedInterest,
tempIdToClientId: Map<string, string>,
mooringToBerthId: Map<string, string>,
opts: ApplyOptions,
result: ApplyResult,
): Promise<void> {
// Idempotency: skip if this source row already created an interest.
const existing = await resolveExistingLink(planned.sourceId, 'interest');
if (existing) {
result.interestsSkipped += 1;
return;
}
const clientId = tempIdToClientId.get(planned.clientTempId);
if (!clientId) {
result.warnings.push(
`Interest source=${planned.sourceId} references unknown client tempId=${planned.clientTempId} - skipped`,
);
return;
}
let berthId: string | null = null;
if (planned.berthMooringNumber) {
berthId =
mooringToBerthId.get(planned.berthMooringNumber) ??
// The legacy NocoDB Interests table uses bare mooring strings like
// "D32", "B16", whereas the new berths schema (mirroring the NocoDB
// Berths snapshot) uses zero-padded "D-32", "B-16". Try the dashed
// form as a fallback so legacy references resolve correctly.
mooringToBerthId.get(normalizeLegacyMooring(planned.berthMooringNumber)) ??
null;
if (!berthId) {
result.warnings.push(
`Interest source=${planned.sourceId} references unknown mooring="${planned.berthMooringNumber}" - interest created without berth link`,
);
}
}
// Optional yacht stub: if the legacy row had a yacht name, create a
// minimal yacht record owned by the client. The new schema requires
// currentOwnerType + currentOwnerId.
let yachtId: string | null = null;
if (planned.yachtName) {
const existingYacht = await resolveExistingLink(planned.sourceId, 'yacht');
if (existingYacht) {
yachtId = existingYacht;
} else if (!opts.rehearsal) {
const [y] = await db
.insert(yachts)
.values({
portId: opts.port.id,
name: planned.yachtName,
currentOwnerType: 'client',
currentOwnerId: clientId,
status: 'active',
})
.returning({ id: yachts.id });
if (y) {
yachtId = y.id;
await db.insert(migrationSourceLinks).values({
sourceSystem: SOURCE_SYSTEM,
sourceId: String(planned.sourceId),
targetEntityType: 'yacht' as const,
targetEntityId: y.id,
appliedId: opts.applyId,
...(opts.appliedBy ? { appliedBy: opts.appliedBy } : {}),
});
result.yachtsInserted += 1;
}
}
}
if (opts.rehearsal) {
result.interestsInserted += 1;
return;
}
const [iRow] = await db
.insert(interests)
.values({
portId: opts.port.id,
clientId,
berthId,
yachtId,
pipelineStage: planned.pipelineStage,
leadCategory: planned.leadCategory,
source: planned.source,
notes: planned.notes,
documensoId: planned.documensoId,
dateEoiSent: planned.dateEoiSent ? new Date(planned.dateEoiSent) : null,
dateEoiSigned: planned.dateEoiSigned ? new Date(planned.dateEoiSigned) : null,
dateContractSent: planned.dateContractSent ? new Date(planned.dateContractSent) : null,
dateContractSigned: planned.dateContractSigned ? new Date(planned.dateContractSigned) : null,
dateDepositReceived: planned.dateDepositReceived
? new Date(planned.dateDepositReceived)
: null,
dateLastContact: planned.dateLastContact ? new Date(planned.dateLastContact) : null,
})
.returning({ id: interests.id });
if (!iRow) throw new Error('Interest insert returned no row');
await db.insert(migrationSourceLinks).values({
sourceSystem: SOURCE_SYSTEM,
sourceId: String(planned.sourceId),
targetEntityType: 'interest' as const,
targetEntityId: iRow.id,
appliedId: opts.applyId,
...(opts.appliedBy ? { appliedBy: opts.appliedBy } : {}),
});
result.interestsInserted += 1;
}
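// --- Illustrative sketch (assumption: not the real helper) -----------------
// Shape of the bare-mooring normalization the berth-lookup fallback above
// relies on. The real normalizeLegacyMooring lives elsewhere in this file;
// this sketch only encodes the documented mapping ("D32" -> "D-32",
// "A4" -> "A-04", "E18" -> "E-18"). Multi-mooring strings like "A3, D30"
// are returned unchanged so they surface in the warnings for human review.
function sketchNormalizeLegacyMooring(raw: string): string {
  const match = /^([A-Za-z])\s*-?\s*(\d{1,2})$/.exec(raw.trim());
  if (!match) return raw; // multi-mooring or unexpected format: leave as-is
  const [, pier, num] = match;
  return `${pier.toUpperCase()}-${num.padStart(2, '0')}`;
}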
/**
* Apply a single PlannedDocument - looks up the parent interest's id from
* the migration ledger, materializes a documents row, and inserts the
* signer rows. Idempotent via target_entity_type='document'.
*/
async function applyDocument(
planned: PlannedDocument,
tempIdToClientId: Map<string, string>,
opts: ApplyOptions,
result: ApplyResult,
): Promise<void> {
const existing = await resolveExistingLink(planned.sourceId, 'document');
if (existing) {
result.documentsSkipped += 1;
return;
}
const interestId = await resolveExistingLink(planned.sourceId, 'interest');
if (!interestId) {
result.warnings.push(
`Document source=${planned.sourceId} cannot resolve parent interest - skipped (interest must apply first)`,
);
return;
}
const clientId = tempIdToClientId.get(planned.clientTempId);
if (!clientId) {
result.warnings.push(
`Document source=${planned.sourceId} references unknown client tempId=${planned.clientTempId} - skipped`,
);
return;
}
if (opts.rehearsal) {
result.documentsInserted += 1;
result.documentSignersInserted += planned.signers.length;
return;
}
const [docRow] = await db
.insert(documents)
.values({
portId: opts.port.id,
interestId,
clientId,
documentType: planned.documentType,
title: planned.title,
status: planned.status,
documensoId: planned.documensoId,
isManualUpload: false,
notes: planned.notes,
createdBy: opts.appliedBy ?? 'migration',
})
.returning({ id: documents.id });
if (!docRow) throw new Error('Document insert returned no row');
await db.insert(migrationSourceLinks).values({
sourceSystem: SOURCE_SYSTEM,
sourceId: String(planned.sourceId),
targetEntityType: 'document' as const,
targetEntityId: docRow.id,
appliedId: opts.applyId,
...(opts.appliedBy ? { appliedBy: opts.appliedBy } : {}),
});
if (planned.signers.length > 0) {
await db.insert(documentSigners).values(
planned.signers.map((s) => ({
documentId: docRow.id,
signerName: s.signerName,
signerEmail: s.signerEmail,
signerRole: s.signerRole,
signingOrder: s.signingOrder,
status: s.status,
signedAt: s.signedAt ? new Date(s.signedAt) : null,
signingUrl: s.signingUrl,
embeddedUrl: s.embeddedUrl,
})),
);
result.documentSignersInserted += planned.signers.length;
}
result.documentsInserted += 1;
}
/**
* Apply a single PlannedResidentialClient - creates a residential_clients
* row plus a default residential_interests row at pipeline_stage='new'
* so the lead surfaces in the residential funnel. Two ledger entries
* record both targets.
*/
async function applyResidentialClient(
planned: PlannedResidentialClient,
opts: ApplyOptions,
result: ApplyResult,
): Promise<void> {
const existingClient = await resolveExistingLink(planned.sourceId, 'residential_client');
if (existingClient) {
result.residentialClientsSkipped += 1;
return;
}
if (opts.rehearsal) {
result.residentialClientsInserted += 1;
result.residentialInterestsInserted += 1;
return;
}
// Wrap the three writes in a transaction so a partial failure (e.g. the
// residential_interests insert throws) does NOT leave an orphan
// residential_clients row. Without the wrap, a later --apply re-run
// would not see a ledger entry for the orphan and would happily insert
// a duplicate residential_clients row.
await db.transaction(async (tx) => {
const [resClient] = await tx
.insert(residentialClients)
.values({
portId: opts.port.id,
fullName: planned.fullName,
email: planned.email,
phone: planned.phoneE164,
phoneE164: planned.phoneE164,
phoneCountry: planned.phoneCountry,
placeOfResidence: planned.placeOfResidence,
placeOfResidenceCountryIso: planned.placeOfResidenceCountryIso,
source: planned.source,
notes: planned.notes,
status: 'prospect',
})
.returning({ id: residentialClients.id });
if (!resClient) throw new Error('Residential client insert returned no row');
const [resInterest] = await tx
.insert(residentialInterests)
.values({
portId: opts.port.id,
residentialClientId: resClient.id,
pipelineStage: 'new',
source: planned.source,
notes: planned.notes,
dateFirstContact: planned.dateFirstContact ? new Date(planned.dateFirstContact) : null,
dateLastContact: planned.dateFirstContact ? new Date(planned.dateFirstContact) : null,
})
.returning({ id: residentialInterests.id });
if (!resInterest) throw new Error('Residential interest insert returned no row');
// Two ledger entries - one per target - both keyed on the same legacy
// sourceId. Keeps re-runs idempotent on either target type.
await tx.insert(migrationSourceLinks).values([
{
sourceSystem: 'nocodb_residential_interests',
sourceId: String(planned.sourceId),
targetEntityType: 'residential_client' as const,
targetEntityId: resClient.id,
appliedId: opts.applyId,
...(opts.appliedBy ? { appliedBy: opts.appliedBy } : {}),
},
{
sourceSystem: 'nocodb_residential_interests',
sourceId: String(planned.sourceId),
targetEntityType: 'residential_interest' as const,
targetEntityId: resInterest.id,
appliedId: opts.applyId,
...(opts.appliedBy ? { appliedBy: opts.appliedBy } : {}),
},
]);
});
result.residentialClientsInserted += 1;
result.residentialInterestsInserted += 1;
}
/**
* Top-level apply driver. Walks the plan once, building the
 * tempId→clientId map as it goes, then walks interests with that map.
*/
export async function applyPlan(plan: MigrationPlan, opts: ApplyOptions): Promise<ApplyResult> {
const result: ApplyResult = {
applyId: opts.applyId,
clientsInserted: 0,
clientsSkipped: 0,
contactsInserted: 0,
addressesInserted: 0,
yachtsInserted: 0,
interestsInserted: 0,
interestsSkipped: 0,
documentsInserted: 0,
documentsSkipped: 0,
documentSignersInserted: 0,
residentialClientsInserted: 0,
residentialClientsSkipped: 0,
residentialInterestsInserted: 0,
    warnings: [],
  };
  // 1. Clients (and their contacts/addresses)
  const tempIdToClientId = new Map<string, string>();
  for (const planned of plan.clients) {
    const { clientId } = await applyClient(planned, opts, result);
    tempIdToClientId.set(planned.tempId, clientId);
  }

  // 2. Build mooring→berthId lookup once, scoped to this port.
  const berthRows = await db
    .select({ id: berths.id, mooringNumber: berths.mooringNumber })
    .from(berths)
    .where(eq(berths.portId, opts.port.id));
  const mooringToBerthId = new Map(berthRows.map((b) => [b.mooringNumber, b.id]));

  // 3. Interests (and yacht stubs)
  for (const planned of plan.interests) {
    await applyInterest(planned, tempIdToClientId, mooringToBerthId, opts, result);
  }

  // 4. Documents (depend on interests being applied first - applyDocument
  // looks up the new interest_id via the migration ledger).
  for (const planned of plan.documents) {
    await applyDocument(planned, tempIdToClientId, opts, result);
  }

  // 5. Residential leads - independent domain, no dependency on the marina
  // apply phase. Each lead gets a residential_clients row + a default
  // residential_interests row.
  for (const planned of plan.residentialClients) {
    await applyResidentialClient(planned, opts, result);
  }
  return result;
}
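// Example wiring from the script's --apply path (illustrative sketch only:
// `applyMigrationPlan` and the exact options shape are assumed names, not
// verified against scripts/migrate-from-nocodb.ts). The applyId is a fresh
// UUID tagged onto every migration_source_links row, and the summary prints
// the first 20 warnings, per the commit description:
//
//   const result = await applyMigrationPlan(plan, {
//     port,                          // resolved via --port-slug, or first port
//     applyId: crypto.randomUUID(),  // pairs with a future --rollback flag
//   });
//   for (const warning of result.warnings.slice(0, 20)) console.warn(warning);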