# Layer 6: Migration & Cutover — Competing Plan (Claude Code)
**Scope:** NocoDB → PostgreSQL data migration script (ETL), MinIO file reorganization, post-migration search index rebuild, user provisioning, smoke testing, DNS cutover, and rollback plan.
**Duration:** 3 days (1 day script development during the week + 2-day weekend cutover)
**Depends on:** L0–L5 complete and tested. The new CRM must be fully functional and passing all tests before migration begins.
---
## 1. Baseline Critique
### What's Good
1. **ETL structure is correct** — Extract/Transform/Load/Validate is the right pattern. The folder layout with separate phases is clean.
2. **Rollback plan is conservative and sensible** — Keeping NocoDB running for 30 days, DNS-based rollback. This is the right approach.
3. **Weekend cutover timing** — Low-traffic period, gives time for smoke testing before Monday.
4. **Validation checklist is thorough** — Count comparisons, orphan detection, FK integrity, pipeline distribution, amount sanity checks.
5. **Post-migration manual tasks list** — Clear ownership and timing for human review steps.
6. **NocoDB-MIGRATION-MAPPING.md is comprehensive** — Field-by-field mapping with transformation rules, data quality fixes, and dedup logic is excellent reference material.
### What Needs Fixing
7. **No dry-run mode** — The script has no `--dry-run` flag. You need to run the full ETL pipeline without actually inserting data to catch transformation errors before the real migration. This is critical for confidence.
8. **No delta/incremental support** — The test run happens Wednesday/Thursday but the real migration happens Saturday. Any data entered in NocoDB between Wednesday and Saturday would be missed. Need a way to identify and migrate new/changed records.
9. **Transaction handling too coarse** — Loading all clients in one giant transaction means one bad record rolls back everything. With ~500 clients, a single malformed email could fail the entire batch. Better to use per-record error handling: try each record, log failures, continue.
10. **`rollback.ts` is listed but never defined** — The file tree shows `rollback.ts` but the baseline never describes what it does. The rollback plan is a manual process (DNS switch), not a script.
11. **Missing tsvector search index rebuild** — After loading data, the `search_vector` tsvector columns on clients, interests, and berths need to be populated. The baseline mentions this at 11:00 Saturday but doesn't show the SQL or mechanism.
12. **Synthetic audit logs lack required fields** — `ip_address` and `user_agent` should have placeholder values for migration entries (`'0.0.0.0'` and `'migration-script/1.0'`), not be left null.
13. **No post-load berth status reconciliation** — After loading interests with berth_id, some berths should be `under_offer` or `sold` based on their linked interests. The migration should reconcile berth statuses against the loaded interest data, not leave them at whatever NocoDB had (which may be inconsistent).
14. **No file integrity verification** — After MinIO file reorganization, there's no checksum comparison to verify files weren't corrupted during copy.
15. **Extraction is sequential** — With only 4 NocoDB tables, parallel extraction would cut the extraction phase time in half.
16. **AI-assisted file matching referenced from L4 Stream C** — This tool may or may not exist in the blessed plan. The migration script needs a standalone file-matching strategy that doesn't depend on L4's AI features.
17. **Missing migration report** — After the migration completes, there should be a comprehensive report (JSON + human-readable) documenting exactly what was migrated, what was skipped, what needs manual review.
---
## 2. Implementation Plan
### Pre-Migration Script Development (1 day during the week)
#### Migration Script Structure
```
scripts/migrate-nocodb/
├── index.ts # Main orchestrator with CLI flags
├── config.ts # Connection config, constants, thresholds
├── modes.ts # --dry-run, --full, --delta, --validate-only
├── extract/
│ ├── interests.ts # Extract NocoDB Interests (mega-table)
│ ├── berths.ts # Extract NocoDB Berths
│ ├── expenses.ts # Extract NocoDB Expenses
│ ├── invoices.ts # Extract NocoDB Invoices
│ └── files.ts # Scan MinIO bucket for file inventory
├── transform/
│ ├── clients.ts # Interests → clients + client_contacts (dedup)
│ ├── interests.ts # Interests → interests + interest_notes
│ ├── berths.ts # Berths → berths + berth_map_data
│ ├── documents.ts # Interests → documents + document_signers
│ ├── expenses.ts # Expenses → expenses
│ ├── invoices.ts # Invoices → invoices + line_items + invoice_expenses
│ ├── files.ts # MinIO scan → files table records
│ └── synthetic.ts # Audit logs, tags, client_tags
├── load/
│ ├── database.ts # Insert into PostgreSQL (per-record error handling)
│ ├── files.ts # Copy MinIO files to new structure
│ └── search-indexes.ts # Rebuild tsvector search columns
├── validate/
│ ├── counts.ts # Compare record counts source vs target
│ ├── spot-check.ts # Verify specific known records
│ ├── integrity.ts # FK constraint validation, orphan detection
│ ├── status-reconcile.ts # Reconcile berth statuses with loaded interests
│ └── file-integrity.ts # Checksum comparison for reorganized files
├── report.ts # Generate migration report (JSON + text)
└── helpers/
├── parse-dimension.ts # parseDimension() — string → { ft, m }
├── parse-price.ts # parsePrice() — string → number
├── normalize-name.ts # normalizeClientName() — trim, title-case, strip honorifics
├── normalize-category.ts # normalizeCategory() — NocoDB enum → snake_case
├── stage-mapping.ts # NocoDB sales process level → pipeline_stage
└── dedup.ts # Client deduplication logic
```
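The helpers above are specified only by their signatures. For illustration, a minimal `parseDimension`, assuming NocoDB stores dimensions as free text like `"45 ft"` or `"13,7m"` (the input formats are assumptions):
```typescript
// scripts/migrate-nocodb/helpers/parse-dimension.ts — illustrative sketch
const FT_PER_M = 3.28084;

export function parseDimension(raw: string | null): { ft: number; m: number } | null {
  if (!raw) return null;
  // Accept "45 ft", "45'", "13.7 m", "13,7m" (comma decimal)
  const match = raw.replace(',', '.').match(/([\d.]+)\s*(ft|feet|'|m)?/i);
  if (!match) return null;
  const value = parseFloat(match[1]);
  if (Number.isNaN(value)) return null;
  const unit = (match[2] ?? 'm').toLowerCase();
  const isFeet = unit.startsWith('f') || unit === "'";
  return isFeet
    ? { ft: value, m: +(value / FT_PER_M).toFixed(2) }
    : { ft: +(value * FT_PER_M).toFixed(2), m: value };
}
```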
#### CLI Modes
```typescript
// scripts/migrate-nocodb/index.ts
import { program } from 'commander';
program
.option('--dry-run', 'Run extraction + transformation + validation without loading')
.option('--full', 'Run full migration (extract → transform → load → validate)')
.option('--delta', 'Migrate only records created/updated after last full migration')
.option('--validate-only', 'Run validation checks against already-loaded data')
.option('--extract-only', 'Snapshot NocoDB tables to the run artifact directory, no transform/load')
.option('--skip-files', 'Skip MinIO file reorganization')
.option('--verbose', 'Log every record transformation')
.parse();
```
**`--dry-run`:** Extracts from NocoDB, transforms all records, runs validation on the transformed data (count checks, dedup candidates, data quality warnings), writes report — but never touches PostgreSQL or reorganizes MinIO files. This is the safest way to verify the transformation logic.
**`--delta`:** Queries NocoDB for records where `Updated At > lastMigrationTimestamp`. Only migrates new/changed records. Used for the Wednesday→Saturday gap. New clients from new interests are created; existing clients with new interests get the interest added.
**`--validate-only`:** Runs all validation checks against the already-loaded PostgreSQL data. Useful for post-load spot-checking without re-running the full migration.
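The baseline never says where `lastMigrationTimestamp` is stored. A minimal sketch, assuming a JSON cursor artifact written at the end of each successful `--full` or `--delta` run (the file name and location are illustrative, not part of the baseline):
```typescript
// scripts/migrate-nocodb/helpers/migration-cursor.ts — illustrative sketch
import fs from 'node:fs';

// Assumed location; any path outside the per-run artifact dirs works
const CURSOR_FILE = 'scripts/migrate-nocodb/output/last-migration.json';

export function readLastMigrationTimestamp(): string | null {
  if (!fs.existsSync(CURSOR_FILE)) return null;
  const cursor = JSON.parse(fs.readFileSync(CURSOR_FILE, 'utf8')) as { completedAt: string };
  return cursor.completedAt;
}

export function writeMigrationCursor(completedAt: string): void {
  // Called only after a run completes successfully, so a crashed run
  // never advances the cursor and its records are retried next time
  fs.writeFileSync(CURSOR_FILE, JSON.stringify({ completedAt }, null, 2));
}
```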
#### Extraction (Parallel)
```typescript
// scripts/migrate-nocodb/extract/index.ts
export async function extractAll(): Promise<ExtractedData> {
// Run all 4 extractions in parallel — independent NocoDB API calls
const [interests, berths, expenses, invoices] = await Promise.all([
extractInterests(),
extractBerths(),
extractExpenses(),
extractInvoices(),
]);
// File scan is independent but I/O heavy, so run it after the API pulls to avoid competing with them
const files = await scanMinIOFiles();
return { interests, berths, expenses, invoices, files };
}
```
Each extractor follows the same pattern:
```typescript
export async function extractInterests(): Promise<NocoDB_Interest[]> {
const records: NocoDB_Interest[] = [];
let offset = 0;
const limit = 200; // Larger batch size for faster extraction
while (true) {
const response = await fetch(
`${NOCODB_API}/api/v1/db/data/noco/${projectId}/${TABLE_IDS.interests}?offset=${offset}&limit=${limit}&sort=-CreatedAt`,
{ headers: { 'xc-auth': NOCODB_TOKEN } },
);
if (!response.ok) throw new Error(`NocoDB extraction failed: ${response.status}`);
const data = await response.json();
records.push(...data.list);
logger.info(`Extracted ${records.length} interests...`);
if (data.list.length < limit) break;
offset += limit;
}
return records;
}
```
Delta mode adds a `where` clause:
```typescript
// For --delta mode:
const where = `(UpdatedAt,gt,${lastMigrationTimestamp})`;
const url = `${NOCODB_API}/...?offset=${offset}&limit=${limit}&where=${encodeURIComponent(where)}`;
```
#### Transformation
All transformation functions follow the pattern documented in `NOCODB-MIGRATION-MAPPING.md`. Key additions:
**Client deduplication with confidence scoring:**
```typescript
// scripts/migrate-nocodb/helpers/dedup.ts
interface DedupResult {
/** Unique clients after dedup */
clients: TransformedClient[];
/** Client contacts (deduplicated per client) */
contacts: TransformedClientContact[];
/** Map: NocoDB Interest ID → client UUID */
interestToClientMap: Map<number, string>;
/** Exact email matches that were auto-merged */
autoMerged: Array<{ email: string; interestIds: number[]; clientId: string }>;
/** Fuzzy name matches for manual review */
fuzzyCandidates: Array<{
nameA: string;
nameB: string;
similarity: number;
interestIds: number[];
}>;
}
export function deduplicateClients(interests: NocoDB_Interest[]): DedupResult {
// Pass 1: Group by exact email (case-insensitive, trimmed)
const emailGroups = new Map<string, NocoDB_Interest[]>();
// ... group interests by normalized email
// Pass 2: Within each email group, merge into single client
// Keep earliest created_at, latest updated_at, merge all contacts
// Pass 3: For interests without email, try fuzzy name match
// Levenshtein distance ≤ 2 on normalizeClientName() + same residence
// These are CANDIDATES only — logged to dedup-candidates.csv for manual review
// Pass 4: Remaining unmatched interests → create new clients
}
```
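Pass 3's similarity test is small enough to hand-roll, avoiding an external dependency. A sketch, assuming `NocoDB_Interest` exposes `Name` and `Residence` fields (assumed field names):
```typescript
// Illustrative Levenshtein distance — inputs are short names, so the
// O(n*m) dynamic-programming table is more than fast enough
export function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i, ...Array(b.length).fill(0)]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Pass 3 candidate test: close names AND same residence (never auto-merged)
export function isFuzzyCandidate(a: NocoDB_Interest, b: NocoDB_Interest): boolean {
  const nameA = normalizeClientName(a.Name);
  const nameB = normalizeClientName(b.Name);
  return levenshtein(nameA, nameB) <= 2 && a.Residence === b.Residence;
}
```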
**Pipeline stage mapping** (using exact values from BR-010):
```typescript
// scripts/migrate-nocodb/helpers/stage-mapping.ts
export const STAGE_MAP: Record<string, string> = {
'General Qualified Interest': 'open',
'Specific Qualified Interest': 'details_sent',
'EOI and NDA Sent': 'in_communication',
'Signed EOI and NDA': 'signed_eoi_nda',
'Made Reservation': 'deposit_10pct',
'Contract Negotiation': 'contract',
'Contract Negotiations Finalized': 'contract',
'Contract Signed': 'completed',
};
export function mapPipelineStage(nocodbStage: string | null): string {
if (!nocodbStage || nocodbStage.trim() === '') return 'open';
const mapped = STAGE_MAP[nocodbStage.trim()];
if (!mapped) {
logger.warn(`Unknown pipeline stage: "${nocodbStage}" — defaulting to "open"`);
return 'open';
}
return mapped;
}
```
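The self-review checklist below calls for unit tests on the mapping helpers; a minimal sketch, assuming Vitest as the test runner:
```typescript
// scripts/migrate-nocodb/helpers/stage-mapping.test.ts — illustrative
import { describe, expect, it } from 'vitest';
import { mapPipelineStage } from './stage-mapping';

describe('mapPipelineStage', () => {
  it('maps known NocoDB stages to BR-010 stages', () => {
    expect(mapPipelineStage('Made Reservation')).toBe('deposit_10pct');
    expect(mapPipelineStage('Contract Negotiations Finalized')).toBe('contract');
  });
  it('defaults empty or unknown stages to "open"', () => {
    expect(mapPipelineStage(null)).toBe('open');
    expect(mapPipelineStage('   ')).toBe('open');
    expect(mapPipelineStage('Some Future Stage')).toBe('open');
  });
});
```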
#### Load Phase (Per-Record Error Handling)
```typescript
// scripts/migrate-nocodb/load/database.ts
interface LoadResult {
entity: string;
total: number;
success: number;
failed: number;
errors: Array<{ recordIndex: number; nocodbId: number | string; error: string }>;
}
export async function loadClients(clients: TransformedClient[]): Promise<LoadResult> {
const result: LoadResult = {
entity: 'clients',
total: clients.length,
success: 0,
failed: 0,
errors: [],
};
for (let i = 0; i < clients.length; i++) {
try {
await db.insert(clientsTable).values(clients[i]);
result.success++;
} catch (error) {
result.failed++;
result.errors.push({
recordIndex: i,
nocodbId: clients[i].metadata?.nocodb_id ?? 'unknown',
error: error instanceof Error ? error.message : String(error),
});
logger.error(`Failed to load client ${i}: ${error}`);
// Continue — don't abort the entire batch
}
}
return result;
}
```
**Load order (respects FK constraints, matches NOCODB-MIGRATION-MAPPING.md):**
```typescript
export async function loadAll(data: TransformedData): Promise<MigrationReport> {
const results: LoadResult[] = [];
// Phase 1: Independent entities (no FK dependencies on each other)
results.push(await loadClients(data.clients));
results.push(await loadClientContacts(data.clientContacts));
results.push(await loadBerths(data.berths));
results.push(await loadBerthMapData(data.berthMapData));
// Phase 2: Entities that reference Phase 1
results.push(await loadInterests(data.interests));
results.push(await loadInterestNotes(data.interestNotes));
results.push(await loadDocuments(data.documents));
results.push(await loadDocumentSigners(data.documentSigners));
// Phase 3: Financial entities
results.push(await loadExpenses(data.expenses));
results.push(await loadFiles(data.files));
results.push(await loadInvoices(data.invoices));
results.push(await loadInvoiceLineItems(data.invoiceLineItems));
results.push(await loadInvoiceExpenses(data.invoiceExpenses));
// Phase 4: Synthetic/derived records
results.push(await loadTags(data.tags));
results.push(await loadClientTags(data.clientTags));
results.push(await loadInterestTags(data.interestTags));
results.push(await loadAuditLogs(data.syntheticAuditLogs));
return generateReport(results);
}
```
#### Search Index Rebuild
```typescript
// scripts/migrate-nocodb/load/search-indexes.ts
export async function rebuildSearchIndexes(): Promise<void> {
logger.info('Rebuilding tsvector search indexes...');
// Clients: weighted search on full_name (A), company_name (B), yacht_name (C)
await db.execute(sql`
UPDATE clients SET search_vector =
setweight(to_tsvector('english', coalesce(full_name, '')), 'A') ||
setweight(to_tsvector('english', coalesce(company_name, '')), 'B') ||
setweight(to_tsvector('english', coalesce(yacht_name, '')), 'C')
WHERE port_id = ${PORT_NIMARA_ID}
`);
// Berths: search on mooring_number, area, berth_type
await db.execute(sql`
UPDATE berths SET search_vector =
setweight(to_tsvector('english', coalesce(mooring_number, '')), 'A') ||
setweight(to_tsvector('english', coalesce(area, '')), 'B') ||
setweight(to_tsvector('english', coalesce(berth_type, '')), 'C')
WHERE port_id = ${PORT_NIMARA_ID}
`);
// Interests: search on client name and berth mooring number.
// The berth lookup is a scalar subquery because PostgreSQL's UPDATE ... FROM
// cannot reference the update target inside a FROM-clause JOIN condition.
await db.execute(sql`
UPDATE interests i SET search_vector =
setweight(to_tsvector('english', coalesce(c.full_name, '')), 'A') ||
setweight(to_tsvector('english', coalesce(
(SELECT b.mooring_number FROM berths b WHERE b.id = i.berth_id), '')), 'B')
FROM clients c
WHERE c.id = i.client_id AND i.port_id = ${PORT_NIMARA_ID}
`);
// pg_trgm GIN indexes need no explicit rebuild; they index the loaded data automatically
logger.info('Search indexes rebuilt.');
}
```
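A cheap follow-up check is worth running here (a sketch; AC-L6-17 below requires every migrated row to end up with a populated vector):
```typescript
// Post-rebuild sanity check: no migrated row should be left with a
// null search_vector; expect n = 0 for all three entities
const missing = await db.execute(sql`
  SELECT 'clients' AS entity, count(*) AS n FROM clients
  WHERE port_id = ${PORT_NIMARA_ID} AND search_vector IS NULL
  UNION ALL
  SELECT 'berths', count(*) FROM berths
  WHERE port_id = ${PORT_NIMARA_ID} AND search_vector IS NULL
  UNION ALL
  SELECT 'interests', count(*) FROM interests
  WHERE port_id = ${PORT_NIMARA_ID} AND search_vector IS NULL
`);
```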
#### Berth Status Reconciliation
```typescript
// scripts/migrate-nocodb/validate/status-reconcile.ts
export async function reconcileBerthStatuses(): Promise<ReconcileReport> {
// After loading all interests, verify berth statuses make sense:
// 1. Berths with active interests (non-archived) in stages beyond 'details_sent'
// should be 'under_offer' or 'sold'
// 2. Berths with completed interests (stage = 'completed') should be 'sold'
// 3. Berths with no linked interests should be 'available' or 'maintenance'
const mismatches: Array<{
berthId: string;
currentStatus: string;
expectedStatus: string;
reason: string;
}> = [];
// Query: berths where status is 'available' but has active interests
const availableWithInterests = await db.execute(sql`
SELECT b.id, b.mooring_number, b.status, i.pipeline_stage
FROM berths b
INNER JOIN interests i ON i.berth_id = b.id AND i.archived_at IS NULL
WHERE b.port_id = ${PORT_NIMARA_ID}
AND b.status = 'available'
AND i.pipeline_stage NOT IN ('open', 'details_sent')
`);
for (const row of availableWithInterests.rows) {
mismatches.push({
berthId: row.id,
currentStatus: 'available',
expectedStatus: 'under_offer',
reason: `Has active interest at stage "${row.pipeline_stage}"`,
});
}
// Log mismatches but don't auto-fix — Matt reviews and decides
return { mismatches, autoFixApplied: false };
}
```
#### File Reorganization (with Integrity Check)
```typescript
// scripts/migrate-nocodb/load/files.ts
export async function reorganizeFiles(
fileMappings: FileMapping[],
options: { dryRun: boolean },
): Promise<FileReorgResult> {
const result: FileReorgResult = { copied: 0, failed: 0, checksumMismatches: 0, errors: [] };
for (const mapping of fileMappings) {
try {
if (options.dryRun) {
// Just verify source exists
await minioClient.statObject(mapping.sourceBucket, mapping.sourcePath);
result.copied++;
continue;
}
// 1. Get source object metadata (for checksum)
const sourceStat = await minioClient.statObject(mapping.sourceBucket, mapping.sourcePath);
// 2. Copy to new location
await minioClient.copyObject(
mapping.targetBucket,
mapping.targetPath,
`/${mapping.sourceBucket}/${mapping.sourcePath}`,
);
// 3. Verify copy integrity (size match)
const targetStat = await minioClient.statObject(mapping.targetBucket, mapping.targetPath);
if (sourceStat.size !== targetStat.size) {
result.checksumMismatches++;
result.errors.push({
file: mapping.sourcePath,
error: `Size mismatch: source=${sourceStat.size}, target=${targetStat.size}`,
});
continue;
}
result.copied++;
} catch (error) {
result.failed++;
result.errors.push({
file: mapping.sourcePath,
error: error instanceof Error ? error.message : String(error),
});
// Continue — don't fail entire file reorganization for one file
}
}
return result;
}
```
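Critique item 14 asked for checksum comparison, and the size check above can be strengthened with ETags. A sketch, assuming the MinIO JS client's `statObject` result exposes `etag` (it does in current releases), with the caveat that multipart-upload ETags are not plain content MD5s, so the check falls back to size-only for those:
```typescript
// Multipart ETags look like "<hash>-<partCount>" and aren't content MD5s
function isPlainMd5Etag(etag: string): boolean {
  return !etag.includes('-');
}

async function verifyCopyIntegrity(
  source: { bucket: string; path: string },
  target: { bucket: string; path: string },
): Promise<{ ok: boolean; reason?: string }> {
  const [s, t] = await Promise.all([
    minioClient.statObject(source.bucket, source.path),
    minioClient.statObject(target.bucket, target.path),
  ]);
  if (s.size !== t.size) {
    return { ok: false, reason: `size mismatch: ${s.size} != ${t.size}` };
  }
  // For single-PUT objects, matching MD5 ETags imply byte-level integrity
  if (isPlainMd5Etag(s.etag) && isPlainMd5Etag(t.etag) && s.etag !== t.etag) {
    return { ok: false, reason: `etag mismatch: ${s.etag} != ${t.etag}` };
  }
  return { ok: true };
}
```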
**File-to-client matching (standalone, no L4 AI dependency):**
```typescript
// Simple rule-based matching — no AI needed for this data volume
import path from 'node:path';
function matchFileToClient(
filePath: string,
clients: TransformedClient[],
interests: TransformedInterest[],
): string | null {
const filename = path.basename(filePath).toLowerCase();
// Rule 1: If path contains a known client name or email
for (const client of clients) {
const nameLower = client.full_name.toLowerCase().replace(/\s+/g, '_');
if (filePath.toLowerCase().includes(nameLower)) return client.id;
}
// Rule 2: If path contains a known berth number + interest context
// e.g., "eoi_A12_JohnDoe.pdf" → find interest for berth A12
// Rule 3: If file is a receipt linked from NocoDB expense
// Already matched during expense transformation
// No match → return null (logged to unmatched-files.csv)
return null;
}
```
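Rule 2 can likewise stay rule-based. A sketch, assuming mooring numbers follow the `A12`-style pattern from the comment above and that `TransformedInterest` carries `berth_id` and `client_id` (assumed field names):
```typescript
// Hypothetical Rule 2 helper: pull a berth code out of the filename and
// resolve it through the loaded interests; ambiguity means no match
function matchByBerthCode(
  filename: string,
  interests: TransformedInterest[],
  berthIdByMooring: Map<string, string>, // mooring_number → berth UUID
): string | null {
  // \b won't work here because underscores count as word characters
  const m = filename.match(/(?:^|[_\-. ])([A-Z]{1,2}\d{1,3})(?=[_\-. ]|$)/i);
  if (!m) return null;
  const berthId = berthIdByMooring.get(m[1].toUpperCase());
  if (!berthId) return null;
  const candidates = interests.filter((i) => i.berth_id === berthId);
  return candidates.length === 1 ? candidates[0].client_id : null;
}
```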
#### Migration Report
```typescript
// scripts/migrate-nocodb/report.ts
import fs from 'node:fs';
interface MigrationReport {
timestamp: string;
duration: string;
mode: 'dry-run' | 'full' | 'delta';
extraction: {
interests: number;
berths: number;
expenses: number;
invoices: number;
files: number;
};
transformation: {
clients: { total: number; deduped: number; autoMerged: number; fuzzyCandidates: number };
contacts: number;
interests: number;
berths: number;
documents: number;
expenses: number;
invoices: number;
lineItems: number;
files: { matched: number; unmatched: number };
};
load: {
results: LoadResult[];
totalSuccess: number;
totalFailed: number;
};
validation: {
countChecks: ValidationCheck[];
integrityChecks: ValidationCheck[];
statusReconciliation: ReconcileReport;
fileIntegrity: FileReorgResult;
allPassed: boolean;
};
reviewFiles: {
dedupCandidates: string; // path to CSV
unmatchedFiles: string; // path to CSV
warnings: string; // path to CSV
};
}
export function generateReport(data: MigrationReport): MigrationReport {
// Write JSON report
fs.writeFileSync('migration-report.json', JSON.stringify(data, null, 2));
// Write human-readable summary
const summary = formatTextReport(data);
fs.writeFileSync('migration-report.txt', summary);
console.log(summary);
return data; // handed back so callers (e.g. loadAll) can return it
}
```
#### Synthetic Audit Logs
```typescript
// scripts/migrate-nocodb/transform/synthetic.ts
export function generateSyntheticAuditLogs(
clients: TransformedClient[],
interests: TransformedInterest[],
berths: TransformedBerth[],
expenses: TransformedExpense[],
invoices: TransformedInvoice[],
): AuditLog[] {
const logs: AuditLog[] = [];
// For each migrated entity, create a synthetic "create" audit entry
for (const client of clients) {
logs.push({
port_id: PORT_NIMARA_ID,
user_id: 'system-migration',
action: 'create',
entity_type: 'client',
entity_id: client.id,
field_changed: null,
old_value: null,
new_value: null,
ip_address: '0.0.0.0', // Placeholder for migration
user_agent: 'migration-script/1.0',
metadata: { source: 'nocodb_migration', nocodb_id: client.metadata?.nocodb_id },
created_at: client.created_at, // Use original creation timestamp
});
}
// Same for interests, berths, expenses, invoices...
return logs;
}
```
---
### Cutover Weekend Execution
#### Friday Evening — Freeze & Final Snapshot
| Time | Action | Owner | Notes |
| ----- | ----------------------------------------------------------------------------------- | ---------- | ------------------------------------------------------------------------ |
| 17:00 | Email team: "CRM maintenance this weekend. Stop entering data in NocoDB by 18:00." | Matt | Give 1-hour warning |
| 18:00 | **Data freeze.** No new entries in NocoDB after this point. | Team | Record exact freeze timestamp |
| 18:15 | Run `--delta` migration (Wednesday→Friday changes) against test DB. Verify. | Script | Catches any gap from test run |
| 18:30 | Take final NocoDB API snapshot: extract all 4 tables to JSON files | Script | `tsx scripts/migrate-nocodb/index.ts --extract-only` (snapshots land in the run's `raw/` artifact directory) |
| 18:45 | Take final MinIO file inventory (list all objects with sizes) | Script | Save as `final-file-inventory.csv` |
| 19:00 | Backup current PostgreSQL (empty CRM schema — just in case) | pg_dump | Save as `pre-migration-backup.sql.gz` |
| 19:15 | Run `--dry-run` on final snapshot | Script | Verify no new transformation errors |
| 19:30 | Review dry-run report. If clean → done for Friday. If issues → fix scripts tonight. | Matt + Dev | Go/no-go decision |
#### Saturday Morning — Data Migration
| Time | Action | Owner | Verify |
| ----- | ---------------------------------------------------------------- | ------ | -------------------------------------- |
| 08:00 | Run full migration: `tsx scripts/migrate-nocodb/index.ts --full` | Script | Watch console output |
| 08:05 | — Extraction completes (~2 min for this data volume) | | Record counts match expected |
| 08:10 | — Transformation completes (~1 min) | | Dedup candidates logged |
| 08:15 | — Loading completes (~5 min) | | Success/failure counts |
| 08:20 | — Search indexes rebuilt | | tsvector columns populated |
| 08:25 | — Validation suite runs | | All checks pass |
| 08:30 | Review migration report | Matt | Check failed records, dedup candidates |
| 08:45 | Review `dedup-candidates.csv` — approve or reject fuzzy merges | Matt | ~5-10 candidates expected |
| 09:00 | Run berth status reconciliation | Script | Review mismatches |
| 09:15 | Fix any identified issues (manual data corrections) | Matt | Direct SQL or API calls |
| 09:30 | Run MinIO file reorganization (if not skipped) | Script | File count and size verification |
| 10:00 | Review `unmatched-files.csv` — manually assign or accept as misc | Matt | ~10-20 files expected |
| 10:15 | Run `--validate-only` — final validation pass | Script | All checks green |
| 10:30 | **Migration data phase complete** | | Go/no-go for user setup |
**Total estimated time: ~2.5 hours** (generous — actual data migration should take 10-15 minutes for this volume: ~500 interests, ~200 berths, ~2000 expenses, ~500 invoices)
#### Saturday Afternoon — System Setup
| Time | Action | Owner | Verify |
| ----- | -------------------------------------------------------------------------------------------------------------- | ------ | ----------------------------- |
| 13:00 | Create user accounts in Better Auth via admin panel | Matt | Users listed |
| 13:15 | Assign users to Port Nimara with roles (super_admin for Matt, director for ops, sales_manager/agent for sales) | Matt | Permission check |
| 13:30 | Send "set password" emails to all users | System | Emails received |
| 14:00 | Configure Poste.io SMTP connection in admin settings | Matt | Test email sends successfully |
| 14:30 | Verify Documenso API connection | Matt | Health check passes |
| 15:00 | Verify MinIO connection + presigned URLs work | Matt | Test file download |
| 15:30 | Configure system_settings: berth status rules, reminder defaults, EOI reminder settings | Matt | Settings saved |
| 16:00 | Configure currency rates (USD primary, ECD pegged at 2.70) | Matt | Rates display on berth page |
| 16:30 | Verify public API: `GET /api/public/berths` returns correct data | Matt | JSON response with berth data |
| 17:00 | **System setup complete** | | Ready for smoke testing |
#### Sunday — Smoke Test & Go Live
| Time | Action | Owner | Verify |
| ----- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | ----- | ------------------------------------------------- |
| 09:00 | **Smoke 1: Login & Dashboard** — Login as Matt → dashboard loads → widgets show correct data → pipeline summary matches expected | Matt | Data present and correct |
| 09:20 | **Smoke 2: Client Browse** — Navigate to clients → scroll list → open a known high-profile client → verify contacts, notes, interests, files all migrated | Matt | All client data present |
| 09:40 | **Smoke 3: Interest Pipeline** — Open pipeline board → verify interests at correct stages → click interest → verify linked berth, milestones, EOI documents | Matt | Stages correct |
| 10:00 | **Smoke 4: Berth Data** — Navigate to berths → select berth → verify specs (LOA, beam, draft, price) → check berth map renders | Matt | Specs accurate |
| 10:20 | **Smoke 5: Financial Data** — Navigate to expenses → verify count and amounts → open an invoice → verify line items and total | Matt | Amounts correct |
| 10:40 | **Smoke 6: Create New Record** — Create a test client → create interest → link berth → verify audit log → delete test data | Matt | New records work |
| 11:00 | **Smoke 7: Email** — Compose test email → send → verify delivery → check thread | Matt | Email works |
| 11:20 | **Smoke 8: Search** — Cmd+K → search for known client name → verify results → search for berth number → verify | Matt | Search works |
| 11:40 | **Smoke 9: Notifications** — Trigger a test reminder → check notification bell → verify toast | Matt | Notifications work |
| 12:00 | Lunch break | | All smokes passed |
| 13:00 | Update website public API endpoint to point at new CRM | Matt | Website berth map updates |
| 13:30 | Update DNS/nginx to point CRM domain at new application | Matt | CRM accessible at production URL |
| 14:00 | Final verification: access CRM at production URL, run through one more smoke test | Matt | All green |
| 14:30 | **GO LIVE DECISION** | Matt | If all pass → live. If critical issue → rollback. |
#### Monday — Day 1 Production
| Time | Action | Owner |
| ----------- | ----------------------------------------------------------------------------- | ----- |
| 07:30 | Check system health dashboard, review overnight alerts | Matt |
| 08:00 | Team logs in, sets passwords | Team |
| 08:30 | Quick 15-min walkthrough of new system for team (screen share) | Matt |
| 08:30–17:00 | Dev on standby for rapid hotfixes | Dev |
| 12:00 | Mid-day check: review audit logs for errors, check BullMQ queues | Matt |
| 17:00 | End-of-day: review full day's audit logs, check data integrity, note feedback | Matt |
---
### Rollback Plan
#### Decision Matrix
| Severity | Example | Action | Max Fix Time |
| -------------- | ---------------------------------------------------- | -------------------------------------------- | ---------------- |
| Cosmetic | UI alignment, wrong color, missing icon | Fix and deploy. No rollback. | — |
| Minor data | One client missing a note, wrong category on expense | Fix via direct SQL. No rollback. | 30 min |
| Feature broken | Can't create invoices, email not sending | Attempt fix. If unfixable → rollback. | 2 hours |
| Data loss | Client records missing, expense amounts wrong | **Immediate rollback.** Investigate offline. | 0 (rollback now) |
| Auth broken | Can't login, session issues, permission errors | Attempt fix. If unfixable → rollback. | 1 hour |
| Critical | Database corruption, application won't start | **Immediate rollback.** | 0 (rollback now) |
#### Rollback Execution Steps
1. **Switch DNS/nginx** back to NocoDB system (update nginx upstream, reload — takes 30 seconds)
2. **Notify team** via email: "Use the old system until further notice. Do NOT use the new CRM."
3. **NocoDB was never turned off** — it's running on separate infrastructure with pre-freeze data
4. **Any data entered in new CRM after go-live** — document what was entered and manually re-enter in NocoDB
5. **Investigate the issue** in the new CRM (it's still running, just not DNS-pointed)
6. **Fix, re-test, schedule second cutover** (next weekend)
**Key safety:** NocoDB runs in parallel for 30 days post-migration. DNS switching is instant and reversible. No data is destroyed.
---
### Post-Migration Cleanup (Week 1)
| Task | When | Owner |
| ------------------------------------------------------------- | ------- | ----- |
| Verify all users have logged in and set passwords | Day 2 | Matt |
| Review audit logs for errors or unusual patterns | Day 2–3 | Matt |
| Collect team feedback on new system (quick survey or chat) | Day 3 | Matt |
| Address quick-fix issues from feedback | Day 3–5 | Dev |
| Monitor BullMQ queues daily for failed jobs | Daily | Matt |
| Verify automated backup ran at 02:00 and is downloadable | Day 2 | Matt |
| Delete old MinIO file paths (originals that were reorganized) | Day 7 | Dev |
| Run full database backup and download a copy off-server | Day 7 | Matt |
| Confirm Google Calendar sync works for any connected user | Day 3 | Matt |
| Review scratchpad/notes workflow with sales team | Day 5 | Matt |
### Post-Migration Cleanup (Week 2–4)
| Task | When | Owner |
| ------------------------------------------------------ | ------- | ----- |
| Monitor system health dashboard weekly | Weekly | Matt |
| Address any remaining feedback items | Ongoing | Dev |
| Verify currency rate auto-refresh is working | Day 14 | Matt |
| Run audit log export for first month | Day 30 | Matt |
| **Decommission NocoDB** (stop container, archive data) | Day 30+ | Matt |
---
## 3. Acceptance Criteria
### Migration Script (AC-L6-01 through AC-L6-10)
1. `--dry-run` mode completes without touching PostgreSQL or MinIO, produces full transformation report
2. `--full` mode extracts all 4 NocoDB tables, transforms, loads, and validates
3. `--delta` mode migrates only records modified after a given timestamp
4. `--validate-only` mode runs all validation checks against existing data
5. Client deduplication correctly merges interests by exact email match
6. Fuzzy name matches logged to `dedup-candidates.csv` for manual review (not auto-merged)
7. Per-record error handling: a single bad record does not fail the entire entity batch
8. Migration report generated (JSON + text) with full statistics
9. Pipeline stages mapped correctly from NocoDB values to BR-010 stages
10. Synthetic audit logs created for all migrated entities with `source: "nocodb_migration"` metadata
### Data Integrity (AC-L6-11 through AC-L6-18)
11. Client count ≤ NocoDB interest count (dedup reduces)
12. Interest count = NocoDB interest count (1:1 mapping)
13. Berth count = NocoDB berth count (1:1 mapping)
14. Expense count = NocoDB expense count (1:1 mapping)
15. Invoice count = NocoDB invoice count (1:1 mapping)
16. Zero orphan records (no FK violations): interests.client_id, interests.berth_id, invoice_expenses
17. tsvector search indexes populated on all clients, interests, and berths
18. Berth status reconciliation report generated — mismatches identified for manual review
### File Migration (AC-L6-19 through AC-L6-22)
19. All MinIO files cataloged in `files` table with correct metadata
20. Files matched to clients where possible, unmatched files logged to CSV
21. File reorganization preserves original file content (size verification)
22. New MinIO path follows `{portSlug}/{entity}/{entityId}/{uuid}.{ext}` convention
### Cutover (AC-L6-23 through AC-L6-28)
23. User accounts created with correct roles and port assignments
24. "Set password" emails delivered to all users
25. Poste.io SMTP connection verified (test email sends)
26. Documenso API connection verified (health check passes)
27. Public API returns correct berth data at production URL
28. All 9 smoke tests pass before go-live decision
### Rollback (AC-L6-29 through AC-L6-31)
29. NocoDB remains running on separate infrastructure throughout migration
30. DNS/nginx rollback tested and documented (can switch in < 1 minute)
31. Rollback decision matrix documented with clear severity criteria
---
## 4. Self-Review Checklist
### Script Quality
- [ ] Migration script runs with `tsx` (TypeScript execution) — no build step needed
- [ ] All NocoDB field names match the actual table structure (verify against NocoDB API explorer)
- [ ] Transformation helpers have unit tests: `parseDimension`, `parsePrice`, `normalizeClientName`, `mapPipelineStage`
- [ ] Error messages include NocoDB record IDs for easy debugging
- [ ] CLI modes are mutually exclusive and well-documented
- [ ] Configuration is entirely environment-variable based (no hardcoded secrets)
### Data Safety
- [ ] No data is deleted from NocoDB at any point
- [ ] MinIO file reorganization copies first, verifies, then deletes originals only on Day 7 (not during migration)
- [ ] `--dry-run` truly doesn't modify any external state
- [ ] Per-record error handling prevents cascade failures
- [ ] All PostgreSQL inserts use the Drizzle ORM (no raw SQL string concatenation)
- [ ] Sensitive fields (email, phone) in audit logs are masked per SECURITY-GUIDELINES.md
### Validation
- [ ] Count validation covers all 6 entity types (clients, contacts, interests, berths, expenses, invoices)
- [ ] Orphan detection covers all FK relationships
- [ ] Spot-check queries are parameterized (not hardcoded record IDs — configurable)
- [ ] Berth status reconciliation runs but doesn't auto-fix (human review)
- [ ] File integrity check compares sizes of source and target
### Process
- [ ] Team notification email drafted and ready
- [ ] Smoke test checklist covers all major features (login, browse, create, search, email, notifications)
- [ ] Rollback procedure documented with timing and responsible parties
- [ ] Post-migration task list assigned to specific owners with dates
- [ ] NocoDB decommission scheduled for Day 30 (not earlier)
---
## Codex Addenda — Merged from Competing Plan Review
### 1. Migration Artifacts as First-Class Outputs
Treat migration artifacts as **first-class outputs**, not transient logs. Every run produces an immutable, timestamped output directory:
```
scripts/migrate-nocodb/output/YYYYMMDD-HHMM/
raw/ # Raw NocoDB JSON/CSV snapshots
normalized/ # Transformed data ready for load
id-maps/ # Source→target ID mappings (JSON)
reports/
extract-counts.json
migration-warnings.csv
dedup-candidates.csv
unmatched-files.csv
validation-report.json
smoke-test.json
```
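A sketch of bootstrapping that directory (the timestamp format mirrors the layout above):
```typescript
// Illustrative sketch — create the immutable run directory before extraction
import fs from 'node:fs';
import path from 'node:path';

export function createRunDir(base = 'scripts/migrate-nocodb/output'): string {
  const stamp = new Date()
    .toISOString()          // e.g. "2026-03-28T09:15:00.000Z"
    .slice(0, 16)           // "2026-03-28T09:15"
    .replace(/[-:]/g, '')   // "20260328T0915"
    .replace('T', '-');     // "20260328-0915"
  const runDir = path.join(base, stamp);
  for (const sub of ['raw', 'normalized', 'id-maps', 'reports']) {
    fs.mkdirSync(path.join(runDir, sub), { recursive: true });
  }
  return runDir;
}
```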
### 2. ID Maps Outside Application Schema
Source-to-target ID maps are stored in artifact JSON files, **not** as metadata columns in the target schema. The locked schema does not include source-ID columns on most entities. Preserve original references only in `id-maps/*.json` for traceability.
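For example, the dedup result's `interestToClientMap` would be serialized like this (a sketch; `runDir` comes from the bootstrap above and `dedup` is the `DedupResult` from the transform phase):
```typescript
// Persist the NocoDB-interest → client-UUID map as a run artifact,
// not as a source-ID column in the locked target schema
fs.writeFileSync(
  path.join(runDir, 'id-maps', 'interest-to-client.json'),
  JSON.stringify(Object.fromEntries(dedup.interestToClientMap), null, 2),
);
```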
### 3. Dry-Run Timing at Day -5
Start dry-run rehearsals at **Day -5** (not day-of). Time the full cutover script, including file copy, and write the final operator checklist based on actual measured durations. File migration is the slowest step and must be timed in rehearsal.
### 4. Entity-Group Transactions
Each entity group loads in its own transaction (see the sketch after the load order below). If a group fails, the run stops and the database is reset before retry. Load order:
1. `ports` seed verified
2. `clients` → `client_contacts`
3. `berths` → `berth_map_data`
4. `interests` → `interest_notes`
5. `documents` → `document_signers`
6. `expenses`
7. `files`
8. `invoices` → `invoice_line_items` → `invoice_expenses`
9. `tags` and tag junctions
10. Synthetic `audit_logs`
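A minimal sketch of the group wrapper, assuming Drizzle's `db.transaction` API and keeping the transaction handle loosely typed:
```typescript
interface LoadGroup {
  name: string;
  load: (tx: unknown) => Promise<void>; // receives the transaction handle
}

export async function loadInGroups(groups: LoadGroup[]): Promise<void> {
  for (const group of groups) {
    // Throwing inside the callback rolls the whole group back; the error
    // propagates and stops the run, per the rule above
    await db.transaction(async (tx) => {
      await group.load(tx);
    });
    logger.info(`Group "${group.name}" committed`);
  }
}
```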
### 5. Resumable File Copy Manifest
File copy failures should produce a **resumable manifest** so the entire data load does not need to rerun for a small number of file issues. Do not delete original MinIO objects during the migration window — copy first, validate, then schedule cleanup after the go-live safety window.
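A sketch of what that manifest could look like, assuming one JSONL line per file so a rerun can skip work already done:
```typescript
// Illustrative resumable-manifest helpers (append-only JSONL)
import fs from 'node:fs';

interface ManifestEntry {
  sourcePath: string;
  status: 'copied' | 'failed';
}

// On rerun, skip every file already marked "copied"
function loadCompleted(manifestPath: string): Set<string> {
  if (!fs.existsSync(manifestPath)) return new Set();
  return new Set(
    fs.readFileSync(manifestPath, 'utf8')
      .split('\n')
      .filter(Boolean)
      .map((line) => JSON.parse(line) as ManifestEntry)
      .filter((e) => e.status === 'copied')
      .map((e) => e.sourcePath),
  );
}

function appendEntry(manifestPath: string, entry: ManifestEntry): void {
  // Append-only writes survive a crashed run
  fs.appendFileSync(manifestPath, JSON.stringify(entry) + '\n');
}
```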
### 6. Dedup Conflict Handling
Client dedup conflicts are never silently merged beyond the approved rules. Uncertain matches go to `dedup-candidates.csv` for manual review. If a berth reference cannot be resolved, the interest loads with `berth_id = null` and a warning artifact.
### 7. Rollback Semantics
Rollback does **not** attempt to reverse-write into NocoDB. It restores traffic to the old system and treats any new-system data as discardable cutover attempts. If the team enters data into the old system after freeze, the cutover is invalid and must be restarted from a new final snapshot.
### 8. User Bootstrap
User bootstrap is manual or admin-driven because Better Auth users do not come from NocoDB. All post-migration "create user", "assign role", and "set password email" steps are audited.