# Berth recommender, data import, PDF management, and sales-send-out emails — comprehensive plan

**Last updated:** 2026-05-05 (edge-case audit appended as §14)
**Owner:** Matt + Claude (this session)
**Branch:** `feat/berth-recommender` (to be created)

This document is the single source of truth for the multi-feature push that bundles:

1. /clients + /interests list-column fixes (the original bug report)
2. Full NocoDB Berths table import + seeding (CRM becomes the source of truth)
3. Schema refactor — many-to-many interests↔berths with role flags + desired-dimension columns
4. Berth recommender (SQL ranking, tier ladder, heat scoring, UI panel)
5. EOI bundle support — multi-berth EOIs + Documenso range-string formatter
6. Per-berth PDF management — versioned storage + reverse parser
7. Sales send-out emails — berth PDF + brochure flows
8. Public-website cutover — public site reads berths from CRM, NocoDB read path retired

It exists primarily so that if the conversation context is compacted, the next session can pick up without losing any of the design decisions captured here.

---

## 1. Confirmed design decisions

| Decision area | Choice |
| --- | --- |
| Multi-berth model | Drop `interest.berthId`; everything M:M via `interest_berths` junction; one row marked `is_primary=true`. Cleaner long-term despite the larger refactor. |
| EOI bypass | Implicit-by-default + per-berth override flag. EOI signed on any berth in an interest covers all `is_in_eoi_bundle=true` berths in that interest unless one has `eoi_bypass_reason` set. |
| Multi-berth EOI generation | Documenso template gets a single merge token (e.g. `eoi_berth_range`) populated by a render function that compresses the `is_in_eoi_bundle=true` berth set into a compact string like `"A1-A10, B2-B5"`. The compact string is **only** used inside the Documenso PDF (space-constrained); CRM UI always shows the berths as individual chips. |
| Public-map "under interest" rule | A berth shows as under interest publicly when it has at least one `is_specific_interest=true` link **OR** any open interest with a paid deposit. EOI-bundle-only berths stay "available" on the public map. |
| Recommender trigger | Always-on panel on the interest detail page. Auto-populates recommendations when the interest has desired dimensions but no specific berth(s). Available on every interest, not gated by sub-status. |
| Tier ladder | Per-port admin setting controls the policy; default ladder: A=no interests, B=lost-only history, C=early-stage open interests, D=hidden when in late stage (deposit/contract/signed). Fall-throughs return berths to a higher tier per the fall-through policy. |
| Fall-through policy | Per-port admin setting: immediate-with-heat-flag (default) / cooldown / never-auto-recommend. Configurable cooldown days. |
| Heat signals | All four signals factor in: (1) fall-through recency, (2) furthest stage reached, (3) historical interest count across all clients, (4) historical EOI signature count. Per-port admin can tune weights. |
| Top N recommendations | Top 5–8 with a "show all feasible" expander. |
| Oversize cap | Max 30% larger than desired dimensions, configurable per port. |
| Add-recommendation UI | Dialog at click time: "Pitching specifically" vs "Just exploring". Linked berths in the interest detail get a persistent toggle to switch between sub-statuses, with a clear visual indicator of the public-map consequence ("This berth will appear as under interest on the public map" / "This berth is hidden from the public map"). |
| Amenity matching | Sales rep adds amenity filters per-interest in the recommender panel after speaking with the client. Amenities are not part of the public form. |
| Explainability | Cards collapsed by default; expand-per-card to see tier, size buffer, amenity match reasoning. Progressive disclosure. |
| Berth read path | Cut over: website calls a new `/api/public/berths` endpoint on the CRM. NocoDB read path retired. |
| Per-berth PDF storage | Full versioning. Every upload creates a new version. Current = latest. Roll back to any prior version. |
| PDF↔DB conflict resolution | On upload: parse the PDF; auto-fill any CRM fields that are currently null; for fields where both have values that disagree, show a diff dialog requiring rep confirmation per field. Audit-log every accepted change. |
| PDF parsing approach | 3-tier fallback: (1) AcroForm field reads if the PDF has named form fields, (2) OCR with positional heuristics for flat text PDFs, (3) AI parse as last-resort fallback when OCR confidence is low. AI is opt-in / suggested only — not the primary path. |
| PDF generation direction | External uploads only. CRM does not auto-regenerate per-berth PDFs from a template. |
| Brochure model | Multiple labeled brochures per port + a "default" marker for fast-send. Versioned similarly to per-berth PDFs. |
| Send-button locations | All three: interest detail (per-berth send + bulk + brochure), berth detail (send to any client via recipient picker), client detail (multi-berth bulk + brochure). |
| Send tracking | Full audit: every send creates a timeline entry on both client and interest, including recipient, who sent, timestamp, exact PDF version, custom body text used. |
| Email layout/wording editability | Layout/styling shell is locked (background blur gradient + logo + white box). Body Markdown inside the white box is admin-editable with merge fields (`{{client_name}}`, `{{berth_mooring}}`, `{{port_name}}`, etc.). Reps get a per-send override input above the body for one-off customization. |
| From-address | Configurable per-port: `sales_from_address` and `noreply_from_address` system settings, with `sales_smtp_*` credentials encrypted at rest. Defaults: `sales@portnimara.com` for sales-initiated sends, `noreply@portnimara.com` for transactional automation. Future swap to OAuth-per-rep is a config change, not a code change. |

---

## 2. Phase & commit plan (one feature branch, multiple commits)

**Branch:** `feat/berth-recommender`

```
feat: full NocoDB berth import + seed [Phase 0]
fix(clients): list contacts + addresses join + col redesign [Phase 1]
fix(interests): list yacht + desired dims + col redesign [Phase 1]
feat(db): m:m interest_berths junction + role flags [Phase 2]
feat(db): desired dimensions on interests + backfill [Phase 2]
feat(berths): public berths API endpoint [Phase 3]
fix(website): swap getBerths to call CRM endpoint [Phase 3 - website repo]
feat(recommender): SQL ranking + tier ladder + heat [Phase 4]
feat(recommender): UI panel + add-to-interest dialog [Phase 4]
feat(eoi): multi-berth EOI generation + range formatter [Phase 5]
feat(storage): pluggable S3-or-filesystem backend [Phase 6a]
feat(storage): migration CLI + admin UI wrapper [Phase 6a]
feat(berths): per-berth PDF storage (versioned) [Phase 6b]
feat(berths): PDF reverse parser (AcroForm/OCR/AI) [Phase 6b]
feat(emails): send-berth-PDF flow + send-brochure flow [Phase 7]
feat(admin): brochures management UI + send-from settings [Phase 7]
chore: update CLAUDE.md with new conventions [Phase 8]
```

Each phase is independently testable. Phases 1, 2, 3 don't depend on the recommender so they're safe early wins. Phase 4 depends on Phases 0+2.

---

## 3. Schema changes

### 3.1 New tables

```ts
// src/lib/db/schema/interests.ts

export const interestBerths = pgTable(
  'interest_berths',
  {
    id: text('id')
      .primaryKey()
      .$defaultFn(() => crypto.randomUUID()),
    interestId: text('interest_id')
      .notNull()
      .references(() => interests.id, { onDelete: 'cascade' }),
    berthId: text('berth_id')
      .notNull()
      .references(() => berths.id, { onDelete: 'restrict' }),
    /** One row per interest is the primary; used in templates / forms / "the berth for this deal" semantics. */
    isPrimary: boolean('is_primary').notNull().default(false),
    /** True = berth shows as "under interest" on the public map. False = legal/EOI-only. */
    isSpecificInterest: boolean('is_specific_interest').notNull().default(true),
    /** True = covered by the EOI bundle for this interest. */
    isInEoiBundle: boolean('is_in_eoi_bundle').notNull().default(false),
    /** Set when EOI is explicitly waived for this berth even though the interest's primary EOI is signed. */
    eoiBypassReason: text('eoi_bypass_reason'),
    eoiBypassedBy: text('eoi_bypassed_by'), // user id
    eoiBypassedAt: timestamp('eoi_bypassed_at', { withTimezone: true }),
    addedBy: text('added_by'), // user id
    addedAt: timestamp('added_at', { withTimezone: true }).notNull().defaultNow(),
    notes: text('notes'),
  },
  (t) => [
    uniqueIndex('idx_ib_interest_berth').on(t.interestId, t.berthId),
    // Only one primary per interest
    uniqueIndex('idx_ib_one_primary')
      .on(t.interestId)
      .where(sql`${t.isPrimary} = true`),
    index('idx_ib_berth').on(t.berthId),
    index('idx_ib_specific')
      .on(t.berthId)
      .where(sql`${t.isSpecificInterest} = true`),
  ],
);
```

### 3.2 Column additions

```sql
-- Interests gain desired-dimensions for the recommender
ALTER TABLE interests
  ADD COLUMN desired_length_ft numeric,
  ADD COLUMN desired_width_ft numeric,
  ADD COLUMN desired_draft_ft numeric;

-- Berths gain pricing fields revealed by the sample PDF, plus a pointer to the current PDF version
-- (full version history lives in berth_pdf_versions).
ALTER TABLE berths
  ADD COLUMN weekly_rate_high_usd numeric,
  ADD COLUMN weekly_rate_low_usd numeric,
  ADD COLUMN daily_rate_high_usd numeric,
  ADD COLUMN daily_rate_low_usd numeric,
  ADD COLUMN pricing_valid_until date,
  ADD COLUMN last_imported_at timestamp with time zone, -- set by NocoDB import script
  ADD COLUMN current_pdf_version_id text REFERENCES berth_pdf_versions(id);
```

Notes:

- The 4 rate columns + `pricing_valid_until` come from the per-berth PDF, not NocoDB. After the Phase 0 NocoDB import these stay null until reps upload PDFs in Phase 6.
- The `last_imported_at` column lets the NocoDB import script implement "do not overwrite if user has manually edited since import" — compare against `updated_at`.
- The pricing-validity date powers a "Pricing data may be stale" warning chip on the berth detail page when `pricing_valid_until < today()`.

### 3.3 More new tables

```ts
// Per-berth PDF version history
export const berthPdfVersions = pgTable('berth_pdf_versions', {
  id: text('id')
    .primaryKey()
    .$defaultFn(() => crypto.randomUUID()),
  berthId: text('berth_id')
    .notNull()
    .references(() => berths.id, { onDelete: 'cascade' }),
  versionNumber: integer('version_number').notNull(), // 1, 2, 3...
  s3Key: text('s3_key').notNull(),
  fileName: text('file_name').notNull(),
  fileSizeBytes: integer('file_size_bytes').notNull(),
  contentSha256: text('content_sha256').notNull(),
  uploadedBy: text('uploaded_by').notNull(), // user id
  uploadedAt: timestamp('uploaded_at', { withTimezone: true }).notNull().defaultNow(),
  /** Diffs accepted on this upload, captured for audit */
  parseResults: jsonb('parse_results'), // { engine: 'acroform'|'ocr'|'ai', extracted: {...}, conflicts: [...], appliedFields: [...] }
});

// Port-wide brochures (multiple per port + a default marker)
export const brochures = pgTable('brochures', {
  id: text('id')
    .primaryKey()
    .$defaultFn(() => crypto.randomUUID()),
  portId: text('port_id')
    .notNull()
    .references(() => ports.id, { onDelete: 'cascade' }),
  label: text('label').notNull(), // 'General', 'Investor Pack', etc.
  description: text('description'),
  isDefault: boolean('is_default').notNull().default(false),
  archivedAt: timestamp('archived_at', { withTimezone: true }),
  createdBy: text('created_by').notNull(),
  createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
});

export const brochureVersions = pgTable('brochure_versions', {
  id: text('id')
    .primaryKey()
    .$defaultFn(() => crypto.randomUUID()),
  brochureId: text('brochure_id')
    .notNull()
    .references(() => brochures.id, { onDelete: 'cascade' }),
  versionNumber: integer('version_number').notNull(),
  s3Key: text('s3_key').notNull(),
  fileName: text('file_name').notNull(),
  fileSizeBytes: integer('file_size_bytes').notNull(),
  contentSha256: text('content_sha256').notNull(),
  uploadedBy: text('uploaded_by').notNull(),
  uploadedAt: timestamp('uploaded_at', { withTimezone: true }).notNull().defaultNow(),
});

// Send-out audit log for berth PDFs and brochures
export const documentSends = pgTable(
  'document_sends',
  {
    id: text('id')
      .primaryKey()
      .$defaultFn(() => crypto.randomUUID()),
    portId: text('port_id')
      .notNull()
      .references(() => ports.id),
    /** Either client_id or interest_id is set (or both) */
    clientId: text('client_id').references(() => clients.id),
    interestId: text('interest_id').references(() => interests.id),
    recipientEmail: text('recipient_email').notNull(),
    documentKind: text('document_kind').notNull(), // 'berth_pdf' | 'brochure'
    berthId: text('berth_id').references(() => berths.id), // when documentKind='berth_pdf'
    berthPdfVersionId: text('berth_pdf_version_id').references(() => berthPdfVersions.id),
    brochureId: text('brochure_id').references(() => brochures.id), // when documentKind='brochure'
    brochureVersionId: text('brochure_version_id').references(() => brochureVersions.id),
    bodyMarkdown: text('body_markdown'), // exact body used (after merge-field expansion)
    sentByUserId: text('sent_by_user_id').notNull(),
    fromAddress: text('from_address').notNull(), // resolved sales@ or per-rep
    sentAt: timestamp('sent_at', { withTimezone: true }).notNull().defaultNow(),
    /** SMTP provider message-id for deliverability tracking */
    messageId: text('message_id'),
    /** When the initial send had its attachment dropped because the SMTP server
     * rejected the size (552 etc.) and the system retried with a download
     * link, this captures the rejection reason for ops visibility. Null when
     * the original send went through as-is. */
    fallbackToLinkReason: text('fallback_to_link_reason'),
  },
  (t) => [
    index('idx_ds_client').on(t.clientId, t.sentAt),
    index('idx_ds_interest').on(t.interestId, t.sentAt),
    index('idx_ds_berth').on(t.berthId, t.sentAt),
  ],
);
```

### 3.4 Removals / renames

```sql
-- After migrating existing interest.berthId values into interest_berths, drop the column.
-- This is an irreversible schema change so it goes in a separate migration AFTER all callers are updated.
ALTER TABLE interests DROP COLUMN berth_id;
```

---

## 4. Service layer

### 4.1 NocoDB berth import (Phase 0)

`scripts/import-berths-from-nocodb.ts`:

- Fetches all rows from NocoDB Berths (table id `mczgos9hr3oa9qc`).
- Maps every column to the corresponding new-CRM `berths` column. The new schema already has every NocoDB column.
- Upserts by `mooring_number` + `port_id`. Existing rows get updated (preserving CRM-side overrides via a "do not overwrite if user has manually edited since import" guard, tracked via a `last_imported_at` column we add to berths in this same phase).
- Reproducible: re-running the script picks up NocoDB additions/edits without clobbering CRM-side changes.
- Also feeds `src/lib/db/seed-data.ts` so fresh installs get the data.
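
Under the hood the no-clobber guard reduces to a timestamp comparison. A minimal sketch, assuming a helper name and row shape that do not exist yet:

```typescript
// Sketch (illustrative helper name) of the "do not overwrite if user has
// manually edited since import" guard used by the upsert. A row is safe to
// overwrite only when it has never been imported, or has not been touched
// since the last import finished.
function shouldOverwriteFromImport(row: {
  updatedAt: Date;
  lastImportedAt: Date | null;
}): boolean {
  if (row.lastImportedAt === null) return true; // first import: nothing to clobber
  return row.updatedAt.getTime() <= row.lastImportedAt.getTime(); // untouched since import
}
```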
### 4.2 listClients fix (Phase 1)

`src/lib/services/clients.service.ts` — add joins:

- `clientContacts` (primary contact only, ordered by `is_primary desc, created_at desc`).
- `clientAddresses` (primary address only).
- One backfill SQL that sets `clients.nationality_iso = (subquery: primary phone's value_country)` where `nationality_iso IS NULL`.

New `ClientRow` shape includes `primaryEmail`, `primaryPhone`, `countryIso`, `latestInterest { stage, mooringNumber }`.

### 4.3 listInterests fix (Phase 1)

`src/lib/services/interests.service.ts` — add joins:

- `yachts` for the linked yacht name.
- New columns `desired_length_ft / width_ft / draft_ft` rendered as a compact "60×18×6 ft" string in a `Berth size desired` column.

### 4.4 Recommender (Phase 4)

`src/lib/services/berth-recommender.service.ts`:

```ts
type RecommendBerthsArgs = {
  interestId: string;
  portId: string;
  // Optional rep-supplied filters
  amenityFilters?: {
    minPowerCapacityKw?: number;
    requiredVoltage?: number;
    requiredAccess?: string;
    requiredMooringType?: string;
    requiredCleatCapacity?: string;
  };
};

type Recommendation = {
  berthId: string;
  mooringNumber: string;
  tier: 'A' | 'B' | 'C' | 'D';
  fitScore: number; // 0-100
  sizeBufferPct: number; // how much larger than desired
  heatScore: number; // 0-100, only relevant for fall-throughs
  reasons: {
    dimensional: string;
    pipeline: string;
    amenities?: string;
    heat?: string;
  };
  // Display data
  lengthFt: number;
  widthFt: number;
  draftFt: number;
  status: string;
  amenities: {
    /* power, voltage, access, mooring_type, etc. */
  };
};
```

Algorithm (single SQL CTE chain, ~50 lines):

1. **Feasible set** — berths where `length_ft >= desired_length`, `width_ft >= desired_width`, `draft_ft >= desired_draft`, and not exceeding the per-port max-oversize-pct cap.
2. **Apply amenity filters** — hard-filter by required amenities.
3. **Tier classification** — left join `interest_berths` aggregates: count of active interests, max stage reached, count of EOI signatures, latest fall-through date. Apply the per-port tier ladder.
4. **Heat score** — for berths that have fall-through history, compute a heat score using the per-port heat weights.
5. **Fit score** — combination of `1 / size_buffer_pct` (closer fit better), tier rank, heat (higher = boosted in late tiers).
6. **Sort + top-N** — sort by tier asc, fit score desc; return top 8 or all-feasible per panel mode.
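
A TypeScript sketch of steps 4–5 (the SQL CTEs compute the equivalent). Weight names mirror the §4.9 settings keys; the normalization constants (365-day recency window, five-stage ladder, count caps) and the fit-score blend are assumptions to be tuned, not decided values:

```typescript
type HeatWeights = {
  recency: number; // heat_weight_recency
  furthestStage: number; // heat_weight_furthest_stage
  interestCount: number; // heat_weight_interest_count
  eoiCount: number; // heat_weight_eoi_count
};

function heatScore(
  w: HeatWeights,
  daysSinceFallThrough: number | null, // null = no fall-through history
  furthestStageRank: number, // 0 (enquiry) .. 5 (signed)
  interestCount: number,
  eoiCount: number,
): number {
  const total = w.recency + w.furthestStage + w.interestCount + w.eoiCount || 1;
  // Normalize each signal to 0..1 before applying its weight.
  const recency = daysSinceFallThrough === null ? 0 : Math.max(0, 1 - daysSinceFallThrough / 365);
  const stage = Math.min(furthestStageRank / 5, 1);
  const interests = Math.min(interestCount / 5, 1);
  const eois = Math.min(eoiCount / 3, 1);
  const raw = w.recency * recency + w.furthestStage * stage + w.interestCount * interests + w.eoiCount * eois;
  return Math.round((raw / total) * 100);
}

function fitScore(sizeBufferPct: number, tier: 'A' | 'B' | 'C' | 'D', heat: number): number {
  const tierRank = { A: 1, B: 0.8, C: 0.6, D: 0.3 }[tier];
  const closeness = 1 / (1 + sizeBufferPct / 100); // tighter size fit scores higher
  return Math.round(100 * (0.6 * closeness + 0.3 * tierRank + 0.1 * (heat / 100)));
}
```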
### 4.5 Public berths API (Phase 3)

`src/app/api/public/berths/route.ts`:

- GET endpoint, no auth (public-facing).
- 5-minute response cache (matching the existing website's behavior).
- Returns the same shape NocoDB returned to the website (so the website's getBerths() swap is a one-line change to the URL + auth header).
- Filters out berths archived in CRM.
- Status mapping: `under_offer` ↔ at-least-one `is_specific_interest=true` link OR paid deposit; `sold` ↔ `status='sold'`; else `available`.
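
The status mapping reduces to a small pure function; a sketch with an assumed aggregate input shape (the real query would compute these two booleans from `interest_berths` joins):

```typescript
type PublicBerthStatus = 'available' | 'under_offer' | 'sold';

// Mirrors the mapping rule: sold wins; otherwise any specific-interest link or
// any open interest with a paid deposit marks the berth as under offer.
// EOI-bundle-only links do not affect the public status.
function toPublicStatus(
  crmStatus: string,
  hasSpecificInterestLink: boolean,
  hasOpenInterestWithPaidDeposit: boolean,
): PublicBerthStatus {
  if (crmStatus === 'sold') return 'sold';
  if (hasSpecificInterestLink || hasOpenInterestWithPaidDeposit) return 'under_offer';
  return 'available';
}
```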
Website-side change (separate repo `Port Nimara/Website`), env-configurable:
```ts
// server/utils/berths.ts
const CRM_PUBLIC_URL = process.env.CRM_PUBLIC_URL;
// Production: https://crm.portnimara.com
// Staging: https://crm-staging.portnimara.com
// Dev: http://localhost:3000

if (!CRM_PUBLIC_URL) throw new Error('CRM_PUBLIC_URL must be set');

export const getBerths = () =>
  $fetch<NocoDbListResult<Berth>>(`${CRM_PUBLIC_URL}/api/public/berths`);
```

`.env.example` on the website repo gains a `CRM_PUBLIC_URL=` line. The previous NocoDB read path is deleted in the same commit. Once the website is in production reading from the CRM, the NocoDB Berths table can be retired.

### 4.6 EOI bundle range formatter (Phase 5)

`src/lib/templates/berth-range.ts`:

```ts
/**
 * Compresses a list of mooring numbers like ['A1', 'A2', 'A3', 'B2', 'B3']
 * into 'A1-A3, B2-B3'. Single-berth runs become bare ('A5', not 'A5-A5').
 * Used only by the Documenso EOI template merge — UI elsewhere shows
 * berths as individual chips.
 */
export function formatBerthRange(mooringNumbers: string[]): string;

Unit-tested against:

- `[]` → `""`
- `['A5']` → `"A5"`
- `['A1', 'A2', 'A3']` → `"A1-A3"`
- `['A1', 'A3']` → `"A1, A3"`
- `['A1', 'A2', 'B5', 'B6', 'B7']` → `"A1-A2, B5-B7"`
- Mixed letter-prefixes, non-numeric suffixes, etc.
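
A sketch that satisfies the cases above; runs are grouped by letter prefix + consecutive numeric suffix, and the handling of non-conforming mooring numbers (kept as standalone entries, lexicographically sorted) is an assumption, not a decided behavior:

```typescript
export function formatBerthRange(mooringNumbers: string[]): string {
  // Split "A12" into prefix "A" + number 12; non-conforming values parse to null.
  const parse = (m: string) => {
    const match = /^([A-Za-z]+)(\d+)$/.exec(m);
    return match ? { prefix: match[1], num: parseInt(match[2], 10) } : null;
  };

  const sorted = [...mooringNumbers].sort((a, b) => {
    const pa = parse(a);
    const pb = parse(b);
    if (pa && pb) {
      return pa.prefix === pb.prefix ? pa.num - pb.num : pa.prefix.localeCompare(pb.prefix);
    }
    return a.localeCompare(b);
  });

  const parts: string[] = [];
  let i = 0;
  while (i < sorted.length) {
    const start = parse(sorted[i]);
    if (!start) {
      parts.push(sorted[i]); // non-conforming value: emit as-is
      i++;
      continue;
    }
    let j = i;
    // Extend the run while the next entry shares the prefix and is consecutive.
    while (j + 1 < sorted.length) {
      const next = parse(sorted[j + 1]);
      const cur = parse(sorted[j])!;
      if (!next || next.prefix !== start.prefix || next.num !== cur.num + 1) break;
      j++;
    }
    parts.push(i === j ? sorted[i] : `${sorted[i]}-${sorted[j]}`); // bare for single-berth runs
    i = j + 1;
  }
  return parts.join(', ');
}
```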
Documenso payload merge: `eoi_berth_range` token populated via this formatter, included in `src/lib/templates/merge-fields.ts` allow-list.

### 4.7a Pluggable storage backend (Phase 6a)

The CRM stores files (per-berth PDFs, brochures, GDPR exports, etc.) through a single abstraction so the deployment can choose between S3-compatible (MinIO/AWS S3/Backblaze B2/Cloudflare R2/Wasabi/Tigris) and local filesystem at runtime.

**Why not "MinIO off → into Postgres"** (the originally-asked option): a 20MB brochure stored as Postgres `bytea` bloats the WAL, balloons backups, can't be streamed by `postgres-js`, and can't be served via CDN. The pluggable filesystem option achieves the "no MinIO required" goal without any of those penalties.

**Storage interface** (`src/lib/storage/index.ts`):

```ts
export interface StorageBackend {
  /** Upload a stream/buffer to the backend. Returns the storage key. */
  put(
    key: string,
    body: Buffer | NodeJS.ReadableStream,
    opts: PutOpts,
  ): Promise<{ key: string; sizeBytes: number; sha256: string }>;
  /** Stream a file out. Throws NotFoundError if missing. */
  get(key: string): Promise<NodeJS.ReadableStream>;
  /** HEAD-equivalent: existence + size check without reading body. */
  head(key: string): Promise<{ sizeBytes: number; contentType: string } | null>;
  /** Delete. Idempotent. */
  delete(key: string): Promise<void>;
  /** Generate a short-lived URL for the browser to upload to (S3 only; filesystem backend returns a CRM-internal URL that proxies the upload). */
  presignUpload(key: string, opts: PresignOpts): Promise<{ url: string; method: 'PUT' | 'POST' }>;
  /** Generate a short-lived URL for downloads (S3 = signed URL; filesystem = CRM-internal proxy URL with HMAC token). */
  presignDownload(key: string, opts: PresignOpts): Promise<{ url: string; expiresAt: Date }>;
  /** Backend-specific identifier for telemetry/admin display. */
  readonly name: 's3' | 'filesystem';
}
```

**Two implementations:**

| Backend | Use case | Notes |
| --- | --- | --- |
| `S3Backend` | MinIO, AWS S3, B2, R2, Wasabi, Tigris (any S3-compatible) | Reuses existing `src/lib/minio/index.ts` code. Configured via `system_settings`: `storage_s3_endpoint`, `_region`, `_bucket`, `_access_key`, `_secret_key_encrypted`, `_force_path_style`. |
| `FilesystemBackend` | Single-VPS deployments, dev | Stores files at `${STORAGE_FILESYSTEM_ROOT}/<key>`. Default root: `./storage` (gitignored). Presigned URLs are CRM-internal `/api/storage/[token]` routes that verify HMAC + serve the file inline. |
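
For the `FilesystemBackend` proxy URLs, the HMAC token could be built roughly like this — a sketch only; the token layout, parameter names, and secret source are assumptions, not the final route contract:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Sign storage key + expiry so the /api/storage/[token] route can verify a
// download URL without any database state.
function signDownloadToken(secret: string, storageKey: string, expiresAtMs: number): string {
  return createHmac('sha256', secret).update(`${storageKey}\n${expiresAtMs}`).digest('hex');
}

// Constant-time comparison + expiry check on the serving route.
function verifyDownloadToken(
  secret: string,
  storageKey: string,
  expiresAtMs: number,
  token: string,
  nowMs: number,
): boolean {
  if (nowMs > expiresAtMs) return false; // link expired
  const expected = Buffer.from(signDownloadToken(secret, storageKey, expiresAtMs), 'hex');
  const given = Buffer.from(token, 'hex');
  return expected.length === given.length && timingSafeEqual(expected, given);
}
```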
**Factory function** (`src/lib/storage/index.ts`):
```ts
export async function getStorageBackend(): Promise<StorageBackend> {
  const setting = await getSystemSetting('storage_backend'); // 's3' | 'filesystem'
  if (setting === 'filesystem') return new FilesystemBackend(/*...config*/);
  return new S3Backend(/*...config*/); // default
}
```

The factory result is cached per process; settings changes invalidate it via the existing `system_settings` cache-invalidation path (Redis pub/sub).

**Migration command** (`scripts/migrate-storage.ts`):

```bash
# Dry run — reports which files would be moved + total bytes, doesn't transfer
pnpm tsx scripts/migrate-storage.ts --from s3 --to filesystem --dry-run

# Actual migration
pnpm tsx scripts/migrate-storage.ts --from s3 --to filesystem
pnpm tsx scripts/migrate-storage.ts --from filesystem --to s3
```

The script:

1. Locks via `pg_advisory_lock(STORAGE_MIGRATION_LOCK_KEY)` so two runs can't conflict.
2. Walks every file-referencing table (`berth_pdf_versions.s3_key`, `brochure_versions.s3_key`, anywhere else MinIO is used today).
3. For each row: streams the file from source backend → uploads to target backend with the same key. Verifies sha256 matches afterward.
4. Once all files are confirmed in the target, updates `system_settings.storage_backend = '<target>'` atomically.
5. Old backend is left intact for 24h so a rollback is trivial; daily orphan-cleanup worker eventually removes after confirmation.
6. Resumable — re-running picks up where it left off via per-row `migrated_at` markers in a temp table.
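
Step 3's copy-and-verify can be sketched as follows. The backend shape here is narrowed to just what the copy needs, and the real implementation would stream rather than buffer; the function and type names are illustrative:

```typescript
// Minimal structural types for the sketch — the real code uses StorageBackend.
type CopySource = { get(key: string): Promise<Buffer> };
type CopyTarget = {
  put(key: string, body: Buffer): Promise<{ sha256: string }>;
  delete(key: string): Promise<void>;
};

// Copy one file, then verify the target's hash against the DB row's
// content_sha256. On mismatch, remove the unverified copy and fail the row.
async function copyAndVerify(
  src: CopySource,
  dst: CopyTarget,
  key: string,
  expectedSha256: string,
): Promise<void> {
  const body = await src.get(key);
  const { sha256 } = await dst.put(key, body);
  if (sha256 !== expectedSha256) {
    await dst.delete(key); // never leave an unverified copy in the target
    throw new Error(`sha256 mismatch for ${key}`);
  }
}
```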
**Admin UI** (`/[portSlug]/admin/storage`, super_admin only):

- Current backend display + capacity stats (file count, total bytes, oldest file)
- "Switch backend" button → confirmation modal with dry-run output (count + bytes to transfer + estimated time)
- Migration runs as a background job; UI polls progress (Socket.IO event)
- Post-migration banner: "Switched from S3 to Filesystem. Old S3 bucket retained for 24h; click here to clean up early."
- The button is a thin wrapper around the CLI script — same code path, different trigger.

**Schema changes:**

```ts
// All existing file-reference columns are already named *_s3_key but that's
// a misnomer once filesystem mode exists. Rename in this phase:
// berth_pdf_versions.s3_key -> berth_pdf_versions.storage_key
// brochure_versions.s3_key -> brochure_versions.storage_key
// (no production data, so a one-shot ALTER TABLE rename is safe per §11.2)
```

### 4.7b PDF management (Phase 6b)

`src/lib/services/berth-pdf.service.ts`:

- `uploadBerthPdf(berthId, file, parseResult)` — stores file via the active `StorageBackend` at key `berths/{berthId}/v{n}/(unknown)`, increments version counter, sets `berths.current_pdf_version_id`.
- `parseBerthPdf(buffer)` — 3-tier:
  1. Try AcroForm read via `pdf-lib`. If named fields exist (`length_ft`, `mooring_number`, etc.), return extracted values + `engine: 'acroform'`.
  2. Else fall back to OCR via Tesseract.js (already in the codebase). Use positional heuristics keyed off label patterns ("Length: 82ft", "Mooring: A12"). Return `engine: 'ocr'`, with confidence per field.
  3. If OCR confidence is below a threshold for too many fields, surface an "AI parse" button to the rep that calls the OpenAI extraction path (already optionally configured via `OPENAI_API_KEY`). Return `engine: 'ai'`.
- `reconcilePdfWithBerth(berthId, parsedFields)` — for each parsed field: if CRM is null, auto-set; if CRM and PDF disagree on a non-null value, return as a `conflict` for the diff dialog.
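
The reconcile rule itself is a pure function over field maps; a sketch with illustrative field names and shapes (the real service would also carry per-field parse confidence):

```typescript
type FieldConflict = { field: string; crmValue: unknown; pdfValue: unknown };

// CRM-null fields are auto-filled from the PDF; non-null values that disagree
// become conflicts for the per-field confirmation dialog.
function reconcileFields(
  crm: Record<string, unknown>,
  parsed: Record<string, unknown>,
): { autoFill: Record<string, unknown>; conflicts: FieldConflict[] } {
  const autoFill: Record<string, unknown> = {};
  const conflicts: FieldConflict[] = [];
  for (const [field, pdfValue] of Object.entries(parsed)) {
    if (pdfValue == null) continue; // nothing extracted for this field
    const crmValue = crm[field];
    if (crmValue == null) {
      autoFill[field] = pdfValue; // CRM is empty: safe to auto-set
    } else if (crmValue !== pdfValue) {
      conflicts.push({ field, crmValue, pdfValue }); // rep must decide
    }
  }
  return { autoFill, conflicts };
}
```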
### 4.8 Sales send-outs (Phase 7)

`src/lib/services/document-sends.service.ts`:

- `sendBerthPdf({ berthId, recipient: { clientId|email, interestId? }, customBodyMarkdown? })` — resolves the rep's body template (per-port editable default) + per-send override, runs merge expansion, attaches the berth's current PDF version, sends from `sales_from_address`, logs to `document_sends`.
- `sendBrochure({ brochureId?, recipient, customBodyMarkdown? })` — similar, defaults to the port's default brochure when `brochureId` omitted.
- Email send goes through the existing `email/index.ts` infrastructure with a new `sender_account` parameter that picks between `noreply` and `sales` SMTP credentials.

Rate-limited: each user can send at most N brochures + M berth PDFs per hour to prevent accidental mass-blasts.
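
The merge-expansion step can be as small as the following sketch; the real implementation should also consult the `merge-fields.ts` allow-list, and the leave-unknown-tokens-intact behavior is an assumption:

```typescript
// Replace {{token}} occurrences with provided values. Unknown tokens are left
// intact so a typo in a template is visible rather than silently blanked.
function expandMergeFields(bodyMarkdown: string, fields: Record<string, string>): string {
  return bodyMarkdown.replace(/\{\{(\w+)\}\}/g, (whole, token: string) =>
    Object.prototype.hasOwnProperty.call(fields, token) ? fields[token] : whole,
  );
}
```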
### 4.9 Per-port settings (Phase 7)

New keys in `system_settings`:

| Key | Type | Default | Notes |
| --- | --- | --- | --- |
| `recommender_max_oversize_pct` | int | 30 | |
| `recommender_top_n_default` | int | 8 | |
| `fallthrough_policy` | text | `'immediate_with_heat'` | One of `immediate_with_heat`, `cooldown`, `never_auto_recommend` |
| `fallthrough_cooldown_days` | int | 30 | Used when policy=cooldown |
| `heat_weight_recency` | int | 30 | 0-100 |
| `heat_weight_furthest_stage` | int | 40 | |
| `heat_weight_interest_count` | int | 15 | |
| `heat_weight_eoi_count` | int | 15 | |
| `tier_ladder_hide_late_stage` | bool | true | If false, late-stage berths show as Tier D dimmed |
| `sales_from_address` | text | `sales@portnimara.com` | |
| `sales_smtp_host` | text | — | Provider-agnostic: e.g. `smtp.gmail.com` (Workspace) or `smtp.office365.com` (M365). |
| `sales_smtp_port` | int | 587 | 465 for SSL, 587 for STARTTLS. |
| `sales_smtp_secure` | bool | false | true=SSL on 465, false=STARTTLS on 587. |
| `sales_smtp_user` | text | — | Usually the same as `sales_from_address`. |
| `sales_smtp_pass_encrypted` | text | — | App password from the provider, encrypted at rest. |
| `sales_imap_host` | text | — | Required for bounce monitoring. e.g. `imap.gmail.com` or `outlook.office365.com`. |
| `sales_imap_port` | int | 993 | Standard IMAPS for both providers. |
| `sales_imap_user` | text | — | Usually same as SMTP user; can be a dedicated bounce-handler mailbox. |
| `sales_imap_pass_encrypted` | text | — | App password, encrypted at rest. |
| `sales_auth_method` | text | `app_password` | Future: `oauth_google` / `oauth_microsoft` when OAuth migration happens. Out of scope for this branch. |
| `noreply_from_address` | text | `noreply@portnimara.com` | Existing |
| `email_template_send_berth_pdf_body` | text | (default markdown) | Admin-editable per-port |
| `email_template_send_brochure_body` | text | (default markdown) | Admin-editable per-port |
|
|||
|
|
| `brochure_max_upload_mb` | int | 50 | Per-port cap on brochure file size |
|
|||
|
|
| `berth_pdf_max_upload_mb` | int | 15 | Per-port cap on per-berth PDF file size |
|
|||
|
|
| `email_attach_threshold_mb` | int | 15 | Files larger than this go as a download link instead of attachment. Auto-fallback to link still applies on SMTP size-rejection regardless. |
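
The four `heat_weight_*` keys above combine into a single heat score. A minimal sketch, assuming each component has already been normalized to 0..1 upstream (that normalization is an assumption — the table only fixes the weights):

```typescript
// Sketch of combining the heat weights from system_settings. Components are
// assumed pre-normalized to 0..1; weights are admin-set ints expected to sum
// to 100, but the function normalizes by the actual total so any rebalance works.
interface HeatComponents {
  recency: number;        // 0..1, newer activity → higher
  furthestStage: number;  // 0..1, later pipeline stage → higher
  interestCount: number;  // 0..1, normalized count of linked interests
  eoiCount: number;       // 0..1, normalized count of EOIs
}

interface HeatWeights {
  recency: number;        // e.g. 30
  furthestStage: number;  // e.g. 40
  interestCount: number;  // e.g. 15
  eoiCount: number;       // e.g. 15
}

function heatScore(c: HeatComponents, w: HeatWeights): number {
  const total = w.recency + w.furthestStage + w.interestCount + w.eoiCount;
  if (total <= 0) return 0;
  const raw =
    c.recency * w.recency +
    c.furthestStage * w.furthestStage +
    c.interestCount * w.interestCount +
    c.eoiCount * w.eoiCount;
  return raw / total; // always 0..1, however the admin rebalances the weights
}
```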

**Global settings** (not per-port — applies system-wide, super_admin only):

| Key | Type | Default | Notes |
| --- | --- | --- | --- |
| `storage_backend` | text | `'s3'` | One of `'s3'` (MinIO/AWS/B2/R2/etc.) or `'filesystem'`. Drives the `getStorageBackend()` factory. |
| `storage_s3_endpoint` | text | — | e.g. `https://s3.amazonaws.com`, `http://minio:9000`, `https://s3.us-west-002.backblazeb2.com` |
| `storage_s3_region` | text | `us-east-1` | |
| `storage_s3_bucket` | text | — | |
| `storage_s3_access_key` | text | — | Encrypted at rest. |
| `storage_s3_secret_key_encrypted` | text | — | Encrypted at rest. |
| `storage_s3_force_path_style` | bool | true | Required for MinIO; false for AWS S3. |
| `storage_filesystem_root` | text | `./storage` | Path on disk; relative paths resolve from process cwd. |
| `storage_proxy_hmac_secret_encrypted` | text | — | Encrypted. Used to sign filesystem proxy download URLs. |

---

## 5. UI components
### 5.1 /clients column redesign (Phase 1)

New columns: **Name · Email · Phone · Country · Source · Latest stage · Created**. Drop unused Yachts/Companies/Tags from the default view; they remain available via the column-picker in saved views.

### 5.2 /interests column redesign (Phase 1)

New columns: **Client · Yacht · Berth (primary mooring) · Berth size desired · Stage · EOI status · Source · Last activity**. Drop the Category column.

### 5.3 Recommender panel (Phase 4)

`src/components/interests/berth-recommender-panel.tsx`:

- Always-visible card on the interest detail page.
- Header: "Recommendations for {interest desired dims}" + refresh button + "Add filters" toggle (opens amenity filters input).
- Body: top-8 recommendations as cards, each with mooring number, dimensions, status pill, tier label, heat indicator chip if relevant.
- Cards collapsed by default; click to expand reasoning (size buffer %, pipeline tier, amenity matches, heat score breakdown).
- Each card has an "Add to interest" primary button (opens the role-picker dialog) + "View berth" secondary link.
- "Show all feasible" expander at the bottom.
### 5.4 Add-to-interest dialog (Phase 4)

`src/components/interests/add-berth-to-interest-dialog.tsx`:

- Two large radio cards: "Pitching specifically" (sets `is_specific_interest=true`) vs "Just exploring" (sets `is_specific_interest=false`).
- Below each: clear consequence text — "This berth will appear as under interest on the public map" / "This berth is hidden from the public map".
- Submit button: "Add berth to interest".

### 5.5 Linked-berths list (Phase 4)

In the interest detail's "Berths" tab:

- Each linked berth row has a toggle/icon-button group showing current sub-status with the same consequence text.
- "Set as primary" action when not primary.
- "Mark in EOI bundle" toggle.
- "Bypass EOI for this berth" with reason textarea (only visible if interest has a signed primary EOI).
### 5.6 Berth detail page (Phase 6, 7)

- New "Documents" tab with the per-berth PDF section: current version preview, version history list, "Replace PDF" upload button.
- Upload triggers the parse → reconcile diff flow.
- "Send to client" button (Phase 7) opens a recipient picker + body composer.

### 5.7 Client detail page (Phase 7)

- New "Send documents" action in the client header: opens a multi-step flow where rep picks (a) which berth PDFs (or none), (b) which brochure (or none), (c) custom body.

### 5.8 Admin: brochures management (Phase 7)

- New `/[portSlug]/admin/brochures` page: list of brochures, upload new, mark default, archive, version history.
- Per-port admin scope.

### 5.9 Admin: send-from settings (Phase 7)

- Section in `/[portSlug]/admin/email`: configure `sales_from_address` + SMTP credentials, `noreply_from_address`. Test-send button. Body-template editors for `email_template_send_berth_pdf_body` and `email_template_send_brochure_body`.
### 5.10 Admin: storage backend (Phase 6a)

- New `/admin/storage` page (super_admin only — not per-port):
  - **Current backend** card: shows whether `s3` or `filesystem` is active; for s3 shows endpoint/bucket; for filesystem shows the resolved absolute root path.
  - **Capacity stats**: file count, total bytes, oldest file timestamp.
  - **Switch backend** button: opens confirmation modal with dry-run output (count + bytes to transfer + estimated time). Migration runs as a background job; UI polls progress via Socket.IO.
  - **Test connection** button (s3 mode only): attempts a list/put/get/delete on a sentinel object; surfaces errors immediately.
  - **Backup hint** sidebar: contextual notes — s3 mode says "Configure your provider's lifecycle/replication"; filesystem mode says "Include `${root}` in your backup tooling."
  - Wraps `pnpm tsx scripts/migrate-storage.ts`; no UI logic that the CLI doesn't also support.

---

## 6. Data migration / backfill

| Backfill step | Source | Target | Notes |
| --- | --- | --- | --- |
| Berth data | NocoDB `mczgos9hr3oa9qc` | `berths` table | Phase 0; idempotent script. |
| `interest_berths` | `interests.berth_id` (existing nullable col) | `interest_berths` rows with `is_primary=true, is_specific_interest=true, is_in_eoi_bundle=(eoi_status='signed')` | Phase 2; one-shot SQL. |
| `clients.nationality_iso` | Primary phone's `client_contacts.value_country` | `clients.nationality_iso` | Phase 1; ~210 rows. |
| `interests.desired_length_ft / width_ft / draft_ft` | NocoDB `Length`, `Width`, `Depth` text fields parsed numerically | New numeric columns | Phase 2; via `migration-transform.ts`. |
| Berth PDFs | Existing PDFs from the old NocoDB (none stored — these come from external designers via fresh upload) | `berth_pdf_versions` | Manual; reps upload fresh PDFs over time. |
| Brochures | Existing brochure files | `brochures` + `brochure_versions` | Manual; admin uploads in the new admin UI. |

---

## 7. Cross-repo work

### 7.1 Mooring-number canonical format — `"A1"`, not `"A-01"`

**Discovered drift (2026-05-05):** the CRM currently stores mooring numbers as `"A-01" .. "E-18"` (hyphen + zero-padded). NocoDB, the public website, the per-berth PDFs, and every external reference use `"A1" .. "E18"` (no hyphen, no padding). The old `scripts/load-berths-to-port-nimara.ts` introduced the wrong format when it seeded the CRM from a hand-rolled snapshot.

**Canonical rule going forward:**

- Letter prefix immediately followed by digits, no hyphen, no leading zeros: `A1`, `A11`, `B5`, `E18`.
- The CRM stores this exact form. The website displays this exact form. The Documenso EOI templates render this exact form. Search/lookup is exact-match on this form.
- Multi-letter prefixes are theoretically possible (e.g. `AA1` if the port ever expands) — the validation regex accepts `^[A-Z]+\d+$`.
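
The same normalization the backfill SQL performs is also useful application-side (e.g. validating import rows or user input). A minimal sketch; the function name is illustrative:

```typescript
// Sketch of the canonical-format rule: strip the hyphen and leading zeros
// from legacy values, uppercase the prefix, and validate ^[A-Z]+\d+$.
const CANONICAL_MOORING = /^[A-Z]+\d+$/;

function normalizeMooringNumber(raw: string): string {
  // accepts "A1", "A-01", "a11", "E-18", "AA1" …
  const m = raw.trim().toUpperCase().match(/^([A-Z]+)-?0*(\d+)$/);
  if (!m) throw new Error(`Unrecognized mooring number: ${raw}`);
  return `${m[1]}${Number(m[2])}`; // Number() drops any remaining leading zeros
}
```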

**One-shot backfill SQL** (Phase 0, runs before the NocoDB import re-imports data):

```sql
UPDATE berths
SET mooring_number = regexp_replace(mooring_number, '^([A-Z]+)-0*(\d+)$', '\1\2')
WHERE mooring_number ~ '^[A-Z]+-0*\d+$';
```

**Code to update or delete:**

- `scripts/load-berths-to-port-nimara.ts` — DELETE in Phase 0 (superseded by the NocoDB import script).
- Any tests / fixtures / hard-coded references in the codebase using `'A-01'` style — grep + fix during Phase 0.
- The new NocoDB import script preserves `"Mooring Number"` verbatim — no transformation.
### 7.2 Website-side cutover

The public website currently reads three things from NocoDB Berths and writes one. After cutover:

| Website file | Current behavior | After cutover |
| --- | --- | --- |
| `server/utils/berths.ts` → `getBerths()` | Lists all berths from NocoDB table `mczgos9hr3oa9qc` | Calls `${CRM_PUBLIC_URL}/api/public/berths` |
| `server/utils/berths.ts` → `getBerthByMooringNumber(num)` | Fetches one berth by `Mooring Number` from NocoDB | Calls `${CRM_PUBLIC_URL}/api/public/berths/${num}` |
| `server/utils/berths.ts` → `linkWebsiteInterestToBerth(berthId, interestId)` | Writes a NocoDB M:M link between berth and website-interest row | Removed — interest↔berth association already happens via the CRM website-intake dual-write (`submission_id` + `port_slug` carry the linkage; the CRM creates the proper `interest_berths` row) |
| `server/utils/email.ts` line 92 | Comment says `// e.g. "Berth A-12"` | Update comment to `// e.g. "Berth A12"` (cosmetic correctness) |
| `server/api/berths.ts` | `defineEventHandler(async () => (await getBerths()).list)` — public berth list endpoint serving the public berth-map | No change to handler; it just calls the updated `getBerths()` |
| `server/api/berth.ts` | `defineEventHandler((event) => getBerthByMooringNumber(getQuery(event).number as string))` — single berth lookup | No change to handler |
| `server/api/register.ts` | Calls `getBerthByMooringNumber` + `linkWebsiteInterestToBerth` after NocoDB write | Drop the `linkWebsiteInterestToBerth` call (CRM handles linkage); keep `getBerthByMooringNumber` as a sanity-check (returns 200 with warning when berth not found) |
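
The swapped read path is small. A sketch of the post-cutover `getBerths()` shape under stated assumptions — the URL-builder helpers and the plain-`fetch` wrapper here are illustrative, not the website's actual util bodies:

```typescript
// Sketch of the post-cutover website read path. CRM_PUBLIC_URL is supplied by
// the environment (§7.4); the endpoint paths follow §7.2. Helper names are
// hypothetical; the real util would layer Nitro/Nuxt error handling on top.
function berthListUrl(crmBaseUrl: string): string {
  return `${crmBaseUrl.replace(/\/$/, '')}/api/public/berths`;
}

function berthUrl(crmBaseUrl: string, mooringNumber: string): string {
  return `${berthListUrl(crmBaseUrl)}/${encodeURIComponent(mooringNumber)}`;
}

async function getBerths(crmBaseUrl: string): Promise<unknown[]> {
  const res = await fetch(berthListUrl(crmBaseUrl));
  if (!res.ok) throw new Error(`CRM berth list failed: ${res.status}`);
  return (await res.json()) as unknown[];
}
```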

**Vue components that consume the Berth shape (no change needed if shape is preserved):**

- `components/pn/specific/website/berths/map/item.vue` — reads `berth["Mooring Number"]` for the map label
- `components/pn/specific/website/berths/filter/results.vue` — table cell + nuxt-link to `/berths/${mooring}`
- `components/pn/specific/website/berths-item/introduction.vue` — page title for `/berths/[number]`
- `components/pn/specific/website/berths-item/form.vue` — passes the mooring number through to `register.ts` as the `berth` field
- `components/pn/specific/website/supplement-eoi/form.vue` — maps to mooring numbers for the supplemental form
### 7.3 CRM public endpoint — shape contract

The CRM `/api/public/berths` and `/api/public/berths/[mooringNumber]` endpoints **must** return the same JSON shape NocoDB returned, so the website's `Berth` type (`utils/data.ts`) and all Vue templates work without modification. Field names use the NocoDB-style capital-letter quoted keys:

```ts
interface Berth {
  Id: number; // CRM uses string UUIDs internally; expose a stable numeric id derived from import order, OR migrate the website's Berth.Id to string
  'Mooring Number': string;
  Length: number; // ft
  Draft: number; // ft (boat draught requirement)
  'Side Pontoon': string;
  'Power Capacity': number; // kW
  Voltage: number; // V
  Status: 'Available' | 'Under Offer' | 'Sold'; // mapped from CRM internal status
  Width: number; // ft
  Area: string;
  'Mooring Type': string;
  'Bow Facing': string;
  'Cleat Type': string;
  'Cleat Capacity': string;
  'Bollard Type': string;
  'Bollard Capacity': string;
  'Nominal Boat Size': number;
  Access: string;
  'Map Data'?: { path: string; x: string; y: string; transform: string; fontSize: string };
}
```

**Two open implementation choices** for the CRM endpoint to resolve when building Phase 3:

1. **`Id` field**: NocoDB used numeric ids (1, 2, 3...). CRM uses UUIDs. The CRM endpoint could either (a) expose the `id` UUID as a string and update the website's `Berth.Id` to `string`, or (b) maintain a stable `legacy_nocodb_id` on each berth from the import and surface that as the numeric `Id` for backwards compatibility. **Recommendation: (a)** — string UUIDs align with the CRM's internal model and the website only uses `Id` for React-keys / URL building, both of which work fine with strings.
2. **`Status` mapping**: CRM internal status enum is `available / under_offer / sold` (snake_case). The endpoint translates to NocoDB's display form (`Available / Under Offer / Sold`).
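
The status translation in choice 2 is a fixed three-entry map; the enum values come straight from the section above:

```typescript
// Sketch of the snake_case → NocoDB-display status translation (choice 2).
type CrmStatus = 'available' | 'under_offer' | 'sold';
type PublicStatus = 'Available' | 'Under Offer' | 'Sold';

const STATUS_DISPLAY: Record<CrmStatus, PublicStatus> = {
  available: 'Available',
  under_offer: 'Under Offer',
  sold: 'Sold',
};

function toPublicStatus(status: CrmStatus): PublicStatus {
  return STATUS_DISPLAY[status];
}
```

Using a `Record<CrmStatus, PublicStatus>` means adding a CRM status without a display form fails at compile time rather than leaking into the public API.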
### 7.4 Website env additions

`Port Nimara/Website/.env.example` gains:

```
# CRM endpoint for live berth data. The website fetches berths from this URL
# instead of reading directly from NocoDB. Production: https://crm.portnimara.com
CRM_PUBLIC_URL=
```
### 7.5 Sequencing

1. **Phase 0**: Mooring-number normalization SQL runs in CRM. Old `load-berths-to-port-nimara.ts` deleted. NocoDB import script imports remaining data.
2. **Phase 3**: CRM `/api/public/berths` + `/api/public/berths/[mooringNumber]` ship and are verified live (with the right shape) before any website change.
3. **Phase 3 (website repo, separate PR)**: `getBerths()` + `getBerthByMooringNumber()` swap to call CRM. `linkWebsiteInterestToBerth()` deleted along with its call site in `register.ts`. Comment fix in `email.ts`. `CRM_PUBLIC_URL` env var added.
4. **Post-cutover**: NocoDB Berths table can be retired (separate cleanup commit, after running parallel for a couple weeks to confirm).

No website push happens until the CRM endpoint is live and the response shape is verified field-by-field against the live NocoDB output.

---

## 8. Testing strategy

| Area | Test type | Coverage |
| --- | --- | --- |
| `formatBerthRange` | Unit (vitest) | All edge cases listed in §4.6 |
| Recommender ranking | Unit | Synthetic berth fixtures across all 4 tiers + amenity filters + oversize cap |
| `reconcilePdfWithBerth` | Unit | Empty-CRM auto-fill, conflict-detected, all-match no-op |
| PDF parser | Integration | AcroForm sample, OCR sample (using a sample PDF the user will provide), AI fallback mocked |
| Recommender API | Integration | E2E from interest detail to add-to-interest |
| Public `/api/public/berths` | Integration | Unauth call, response shape matches old NocoDB shape, status mapping |
| Send-out flows | Integration | Email lands with correct attachment + correct from-address; audit row written |
| Multi-berth EOI | Integration | Documenso payload includes correct `eoi_berth_range` |
| Public-map filter | Integration | EOI-bundle-only berth shows as available; specific-interest berth shows as under-interest |

---

## 9. Pending items / open questions — updated 2026-05-05

1. **Sample PDF** — Reviewed (`berth_pdf_example/Berth_Spec_Sheet_A1.pdf`). 0 AcroForm fields confirmed via `pdf-lib`. **OCR with positional heuristics is the primary parser path.** AcroForm tier is built defensively but won't match this PDF family. AI fallback only when OCR confidence dips below threshold.
2. **PDF format observations** (drives parser layout-aware extraction):
   - Header line: `BERTH NUMBER` then `<Mooring number, large display> <nominal size, smaller superscript>` (e.g. "A1" + "200'").
   - Right column, top: dimensional fields in `Label: <imperial> / <metric>` format — Length, Width, Water Depth.
   - Right column, mid: extra fields in `Label: <value>` form — Bow Facing, Pontoon, Power Capacity (kW), Voltage at 60 Hz, Max. draught of vessel.
   - Mid-page: `PURCHASE PRICE:` block (Fee Simple OR Strata Lot tenure); `WEEK HIGH / LOW`; `DAY HIGH / LOW`; pricing-validity date sentence ("CONFIRMED THROUGH UNTIL <date>").
   - Specifications block: Mooring Type, Cleat Type, Cleat Capacity, Bollard Type, Bollard Capacity, Access — all per-berth.
   - Amenities + Services blocks: appear constant across all berths (port-wide). **Not parsed per-berth; modelled as port-level configuration.**
3. **Schema gaps revealed by the PDF** — five berth columns to add in Phase 0:
   - `weekly_rate_high_usd numeric` / `weekly_rate_low_usd numeric`
   - `daily_rate_high_usd numeric` / `daily_rate_low_usd numeric`
   - `pricing_valid_until date` (the "ALL PRICES ABOVE ARE CONFIRMED THROUGH UNTIL <date>" line — used to flag stale pricing on the berth detail page)
4. **Sample brochure** — Not yet shared. Lower priority; brochure UI doesn't depend on parsing the brochure file.
5. **SMTP app password for `sales@portnimara.com`** — Will be obtained close to production cutover. Not blocking Phases 0–6. Phase 7 ships the email-account admin UI immediately; the credential gets entered when available.
6. **`CRM_PUBLIC_URL`** — Confirmed: `https://crm.portnimara.com` once live. **Configurable via env var `CRM_PUBLIC_URL` on the website repo, not hard-coded.** Phase 3 endpoint must work when accessed via that hostname.
7. **Tenure/lease decoration** — `tenure_type / tenure_years / tenure_start_date / tenure_end_date` already exist on `berths`. The PDF's "Fee Simple OR Strata Lot" maps to a per-berth `tenure_type` value. Add `'fee_simple'` and `'strata_lot'` to the allowed enum.
8. **AI parse OPENAI_API_KEY** — Already optional in env. The AI fallback path checks `if (!env.OPENAI_API_KEY) return null` and surfaces a clear error in the rep UI if AI is requested but unavailable.
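
The `Label: <imperial> / <metric>` pattern from observation 2 can be extracted with a simple regex pass over the OCR text. A sketch under stated assumptions — real positional heuristics would also use Tesseract's word bounding boxes; this shows only the label-pattern half:

```typescript
// Illustrative extraction of `Label: <imperial> / <metric>` lines from raw
// OCR text (e.g. "Length: 82' / 25 m"). Shapes and names are hypothetical.
interface DimField { feet: number; meters?: number }

function extractDimension(ocrText: string, label: string): DimField | null {
  // matches e.g. "Length: 82' / 25 m" or "Width: 24ft / 7.3m"
  const re = new RegExp(
    `${label}\\s*:\\s*([\\d.]+)\\s*(?:'|ft)\\s*(?:/\\s*([\\d.]+)\\s*m)?`,
    'i',
  );
  const m = ocrText.match(re);
  if (!m) return null;
  return { feet: Number(m[1]), meters: m[2] ? Number(m[2]) : undefined };
}
```

A confidence score per field (as §4.7 requires) would come from Tesseract's per-word confidences over the matched span, not from the regex itself.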

---

## 10. Out of scope (for this branch)

- OAuth-per-rep email (deferred; configurable design lets us add it later as another credential type).
- CRM-side regeneration of berth PDFs from a designed template (PDFs are external uploads only).
- Public-form amenity capture (rep-driven filters only for now).
- Decommissioning the NocoDB Berths table (happens after the cutover is verified; separate cleanup commit).
- /clients and /interests view-saving improvements beyond the column redesign.
- Postgres `bytea` storage as a third backend option (rejected per §4.7a — pluggable s3/filesystem covers the "no MinIO required" use case without WAL/backup bloat).
- GDPR cascade behavior for `document_sends` (left as `OPEN` in §14.10; default lean: anonymize-PII keep metadata; revisit when Phase 7 schema lands).
- Multi-node filesystem-backend support (filesystem requires single-node; multi-node deployments must use s3-compatible).

---

## 11. Risks and mitigations

| Risk | Mitigation |
| --- | --- |
| Schema refactor (drop `interest.berthId`) breaks callers | **No prod data yet** (confirmed 2026-05-05) — refactor lands in one commit (drop column + add junction + update callers + migrate the small dev dataset). Every schema change ships as a Drizzle migration; `db:generate` is required for every schema-touching commit so the migration chain stays consistent for the eventual prod deployment. |
| OCR misreads silently corrupt berth data | Diff dialog mandatory for non-null conflicts; auto-fill only when CRM is null. Audit log enables rollback. |
| Mass-send via the new flows | Per-user hourly send rate limit; dry-run preview mode. |
| Documenso template change breaks the EOI bundle render | `eoi_berth_range` token added to `VALID_MERGE_TOKENS` allow-list; tests assert formatter output for golden inputs. |
| Website cutover breaks the public berth map | CRM endpoint cached (5-min TTL stale-while-revalidate); off-hours cutover; rollback is reverting one website file. NocoDB stays read-only during the parallel-run window. |
| **Brochure PDFs are 20MB+** | See §11.1 below. |

---

### 11.1 Large brochure PDFs (20MB+) — handling

Brochures are port-wide marketing PDFs and run **20MB+** (per user, 2026-05-05). This is well above what default Next.js / browser upload paths handle gracefully. Implementation requirements:

**Upload path:**

- Use **direct-to-MinIO presigned PUTs** for brochure (and per-berth PDF) uploads, not server-proxied multipart. The CRM hands the browser a presigned URL; the browser uploads directly; only metadata (s3_key, size, sha256) is POSTed to the CRM. Avoids hitting Next.js's body-size limit and frees the Node.js server from holding 20MB+ in memory.
- The existing MinIO infrastructure (`src/lib/minio/index.ts`) already supports presigned URL generation; reuse it.
- Server-side validation: signed URL constrains content-type to `application/pdf` and max size to **50 MB** for brochures, **15 MB** for per-berth PDFs (per-port admin can adjust both via system_settings).
- After the browser-side upload completes, CRM verifies via HEAD request that the object exists at the expected size + content-type before writing the `brochure_versions` / `berth_pdf_versions` row. Orphan blobs from failed uploads are cleaned up by a daily worker job.
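
The post-upload HEAD verification is a small pure check once the HEAD response is in hand. A sketch — the `HeadResult` shape is illustrative, not the real MinIO client response:

```typescript
// Sketch of the post-upload verification: only write the version row when the
// object the browser claims to have uploaded exists with the expected size
// and content type. HeadResult is a hypothetical shape, not the MinIO SDK's.
interface HeadResult { exists: boolean; size: number; contentType: string }

function verifyUploadedObject(
  head: HeadResult,
  expectedSize: number,
  maxUploadMb: number,
): { ok: boolean; reason?: string } {
  if (!head.exists) return { ok: false, reason: 'object_missing' };
  if (head.contentType !== 'application/pdf') return { ok: false, reason: 'wrong_content_type' };
  if (head.size !== expectedSize) return { ok: false, reason: 'size_mismatch' };
  if (head.size > maxUploadMb * 1024 * 1024) return { ok: false, reason: 'over_limit' };
  return { ok: true };
}
```

Any `ok: false` result leaves the blob for the daily orphan-cleanup worker rather than deleting inline.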

**Download / send-out path:**

- Outbound emails attach the PDF by streaming from MinIO into nodemailer rather than buffering. The existing email service can stream attachments — confirmed via `email/index.ts` review.
- **Default attachment threshold: 15 MB raw**. Files at or below this go as an attachment; files above go as a short-lived (24 h) signed-URL download link in the email body instead. Per-port admin can override.
- Rationale (encoded size = raw × ~1.37 due to base64): 15 MB raw = ~20.5 MB encoded, which fits Gmail (25 MB), Yahoo (25 MB), AOL (25 MB), Office 365 default (20 MB) and iCloud (20 MB). May still bounce on stricter corporate gateways (~10 MB), but those are recovered via the manual-confirmed fallback below.
- Sample brochure today (`Port-Nimara-Brochure-March-2025`) is 10.26 MB raw → ~14 MB encoded → ships comfortably as attachment everywhere normal.
- **Pre-send size check is the only automatic path** — the threshold decision is made _before_ the send reaches the SMTP relay. One delivery, one outcome. Eliminates the only realistic duplicate-send scenario (forwarding chains where the company gateway accepts the attachment and Gmail rejects the forwarded copy).
- **Async-bounce handling — manual confirmation, not auto-retry**: when an SMTP size-rejection arrives asynchronously (relay returned 250, recipient gateway rejected later — common with corporate gateways behind forwarding rules), the system:
  - Polls `sales@`'s inbox via IMAP (`sales_imap_host` / `_port` / `_user` / `_pass_encrypted`) for incoming bounce messages. The CRM already has `imapflow` + `mailparser` as dependencies (used elsewhere for client-conversation IMAP), so this is incremental work, not a new library.
  - Parses each DSN, identifies size-related codes (`552 "Message size exceeds maximum"`, `552-5.3.4 "Message too big for system"`, `5.2.3 "Message size exceeds fixed maximum"`, etc.) via a bounce-classifier.
  - Matches the bounce back to a `document_sends` row by message-id (the `messageId` column already exists on the schema).
  - Marks the row with `bounce_reason` + `bounce_classified_as = 'size'`.
  - Surfaces a banner on the document timeline entry: "Delivery rejected — recipient gateway said the file was too large. [Resend as link]".
  - The rep clicks once to retry as a link. Idempotent (retry uses the existing `document_sends.id` with a `retried_at` flag — re-clicking does nothing if a retry already succeeded).
- **Never auto-retries an async bounce.** The forwarding-chain edge case (company.com accepts the attachment + Gmail rejects the forward) is unavoidable from the sender side; we just refuse to silently make it worse by sending the link unprompted.

**Bounce monitoring requires IMAP credentials, not just SMTP** — the CRM cannot read bounces with send-only access. Both Gmail/Workspace and M365 deliver DSNs to the sending account's inbox; both expose IMAP at `imap.gmail.com:993` and `outlook.office365.com:993` respectively, both authenticated with the same app password as their SMTP. If the user provides only SMTP credentials, the bounce-monitor is disabled gracefully and the size-rejection banner won't appear (the send-out flow still works for everything ≤ threshold).
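The pre-send size decision above reduces to a comparison against the configured threshold, with the ~1.37 base64 growth factor used only for the rationale/estimates. A sketch, with names invented here:

```typescript
// Sketch of the pre-send attachment decision. Threshold comes from the
// email_attach_threshold_mb setting; the 1.37 factor mirrors the base64
// rationale bullet above. Function names are illustrative.
const BASE64_GROWTH = 1.37;

function sendAsAttachment(rawBytes: number, thresholdMb: number): boolean {
  // at or below the raw-size threshold → attach; above → signed-URL link
  return rawBytes <= thresholdMb * 1024 * 1024;
}

function estimatedEncodedMb(rawBytes: number): number {
  return (rawBytes * BASE64_GROWTH) / (1024 * 1024);
}
```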

**Provider-agnostic settings** — see §4.9 for the full setting list. The CRM doesn't bake in "Gmail" or "M365" assumptions; the admin UI in `/[portSlug]/admin/email` exposes a simple "Use Gmail preset" / "Use Microsoft 365 preset" button that fills in the right host/port/secure values, but the underlying storage is generic and a future swap from one to the other is a config change, not a redeploy.
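The bounce-classifier step from the async-bounce flow above can be sketched as a pattern match over the DSN text. The code list mirrors the examples given in that bullet; a production classifier would inspect mailparser's structured DSN fields rather than raw text:

```typescript
// Sketch of the size-related DSN classifier. Patterns mirror the examples in
// §11.1 (552, 5.3.4, 5.2.3); real code would read structured DSN fields.
const SIZE_BOUNCE_PATTERNS: RegExp[] = [
  /\b552\b/,                               // "552 Message size exceeds maximum"
  /\b5\.3\.4\b/,                           // "Message too big for system"
  /\b5\.2\.3\b/,                           // "Message size exceeds fixed maximum"
  /message (size|too) (big|large|exceeds)/i,
];

function classifyBounce(dsnText: string): 'size' | 'other' {
  return SIZE_BOUNCE_PATTERNS.some((re) => re.test(dsnText)) ? 'size' : 'other';
}
```

Only a `'size'` classification triggers the "[Resend as link]" banner; everything else is surfaced as a generic delivery failure.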
- The MinIO bucket has a per-object lifecycle rule deleting expired download tokens after 30 days.

**Storage cost:**

- Versioned brochures + 5–10 brochures per port × 20–50 MB each × every version retained ≈ a few hundred MB per port over the project lifetime. Cheap on MinIO.

**Schema changes for size limits / URL strategy:**

```sql
ALTER TABLE brochure_versions ADD COLUMN download_url_expires_at timestamp with time zone;
ALTER TABLE berth_pdf_versions ADD COLUMN download_url_expires_at timestamp with time zone;
```
(used to throttle regeneration of the signed URLs — only re-sign when the cached one is within 1 hour of expiry).
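That throttle is a single predicate over `download_url_expires_at`; the function name here is invented for illustration:

```typescript
// Sketch of the re-sign throttle: reuse the cached signed URL unless it is
// missing, expired, or within one hour of download_url_expires_at.
const RESIGN_MARGIN_MS = 60 * 60 * 1000; // 1 hour

function needsResign(expiresAt: Date | null, now = new Date()): boolean {
  if (!expiresAt) return true; // never signed yet
  return expiresAt.getTime() - now.getTime() < RESIGN_MARGIN_MS;
}
```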

**system_settings additions:**

| Key | Default | Purpose |
| --- | --- | --- |
| `brochure_max_upload_mb` | 50 | Per-port cap on brochure file size |
| `berth_pdf_max_upload_mb` | 15 | Per-port cap on per-berth PDF file size |
| `email_attach_threshold_mb` | 15 | Files larger than this go as a download link instead of an attachment. Auto-fallback to link on SMTP size-rejection regardless of this setting. |
|
|||
|
|
|
|||
|
|
### 11.2 Schema/migration discipline (since we're working without prod data)
|
|||
|
|
|
|||
|
|
User confirmed (2026-05-05) that there's no production data depending on the current CRM schema, so refactors don't need backwards-compatibility shims. **However**, every schema change still ships as a proper Drizzle migration:
|
|||
|
|
|
|||
|
|
- Run `pnpm db:generate` after every schema edit.
|
|||
|
|
- Verify the generated SQL matches the snapshot diff — no missing columns, no orphaned constraints.
|
|||
|
|
- Verify the prevId chain in `meta/_journal.json` is contiguous (the audit pattern from the recent migration-chain repair).
|
|||
|
|
- Migrations are run on dev via `pnpm db:push` (or applied directly via psql for one-off backfills like the mooring-number normalization).
- Every backfill SQL is idempotent (safe to re-run) and committed alongside the migration that introduces the columns.
- The migration that drops `interest.berth_id` runs **after** the migration that creates `interest_berths` and the data migration that copies values across.
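
The prevId-chain audit can be scripted for CI; a sketch that assumes each Drizzle snapshot in `meta/` exposes `id` and `prevId` fields (loading the JSON files from disk is omitted):

```typescript
// Assert that snapshot N's prevId equals snapshot N-1's id, i.e. the
// migration chain is contiguous with no forks or gaps.
interface Snapshot {
  id: string;
  prevId: string;
}

function assertContiguousChain(snapshots: Snapshot[]): void {
  for (let i = 1; i < snapshots.length; i++) {
    if (snapshots[i].prevId !== snapshots[i - 1].id) {
      throw new Error(
        `broken chain at snapshot ${i}: prevId ${snapshots[i].prevId} does not match ${snapshots[i - 1].id}`,
      );
    }
  }
}
```
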
The "we can break things in dev" license is for **shape changes**, not for sloppy migration files. The chain has to be clean for the eventual prod cutover, where re-running migrations from scratch must produce the exact same end state.
## 12. Definition of done
- [ ] All vitest tests pass (currently 956 — target 1000+ after this work).
- [ ] All Playwright smoke + exhaustive projects pass.
- [ ] tsc clean.
- [ ] ESLint clean.
- [ ] CRM `/api/public/berths` returns the full berth data, the website is cut over to it, and the website's NocoDB-read code path is removed.
- [ ] /clients list shows email + phone + country for every record that has the data.
- [ ] /interests list shows yacht + desired dims + EOI status.
- [ ] Recommender panel renders on every interest detail with desired dimensions; produces 5–8 ranked options.
- [ ] Berth detail page accepts PDF uploads, shows version history, and shows the reconcile-diff dialog on conflicts.
- [ ] Sales rep can send a berth PDF or brochure to a client; audit row appears in the activity timeline.
- [ ] Per-port admin can configure recommender weights, fall-through policy, brochures, and email-from settings.
- [ ] Super_admin can switch the storage backend between s3 and filesystem; migration CLI works in both directions.
- [ ] All berth mooring numbers in CRM match the canonical `^[A-Z]+\d+$` format (no hyphens, no leading zeros).
- [ ] Sample berth PDF (`berth_pdf_example/Berth_Spec_Sheet_A1.pdf`) parses end-to-end via OCR with positional heuristics; extracted fields match the actual berth data.
- [ ] Sample brochure (`berth_pdf_example/Port-Nimara-Brochure-March-2025_5nT92g.pdf`, 10.26 MB) sends as an attachment to a Gmail recipient without falling back to a download link.
- [ ] CLAUDE.md updated with new conventions.
---
## 13. CLAUDE.md additions to land in Phase 8
- Multi-berth interest model: `interest_berths` is the source of truth; `interest.berthId` no longer exists. `is_primary` semantics; `is_specific_interest` semantics; `is_in_eoi_bundle` semantics.
- Berth PDFs: versioned via the active storage backend; `berths.current_pdf_version_id` always points to the latest active version.
- Brochures: per-port; default brochure marked via `is_default` flag; archived brochures retain version history.
- Send-from accounts: configurable via `system_settings`; defaults to `sales@portnimara.com` for human-touch and `noreply@portnimara.com` for automation.
- Recommender: pure SQL ranking — no AI. Per-port admin tunes weights via `system_settings`. Heat scoring applies only to fall-through berths.
- EOI bundle: render via `formatBerthRange()` for the Documenso `eoi_berth_range` merge field; CRM UI always shows individual berth chips.
- Public berths API: `/api/public/berths` is the single source of truth for the public website; output shape is stable and versioned.
- Mooring number canonical format: `^[A-Z]+\d+$` (e.g. `A1`, `B12`, `E18`) — no hyphen, no leading zeros. Stored, displayed, URL-encoded, and rendered in EOIs in this exact form.
- Storage backend: code never imports MinIO/S3 directly. All file I/O goes through `getStorageBackend()` from `src/lib/storage`. Switching backends is a `system_settings` change + `pnpm tsx scripts/migrate-storage.ts` run.
- Filesystem storage backend is **single-node only**. Multi-node deployments must use the s3-compatible backend.
- Email send-out: `document_sends` is the audit table (separate from `audit_logs` because volume + binary refs). Bounce monitoring requires IMAP credentials in addition to SMTP — without them, the size-rejection banner stays disabled.
---
## 14. Edge-case audit (approved 2026-05-05)
Severity legend: 🔴 critical (data loss/corruption, wrong-recipient sends), 🟠 high (breaks workflow), 🟡 medium (unexpected behavior, needs handling), 🟢 low (cosmetic / future-proofing).
### 14.1 Phase 0 — NocoDB berth import + mooring normalization
| Case | Sev | Mitigation |
| ------------------------------------------------------------------------------------------------------------------------- | --- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Mooring number collisions across NocoDB rows | 🔴 | Unique constraint on `(port_id, mooring_number)` already in CRM schema. Import uses `ON CONFLICT (port_id, mooring_number) DO UPDATE` so collisions show as updates, not duplicates. |
| User edits berth in CRM, then NocoDB also gets edited — whose change wins? | 🟠 | Compare `berths.updated_at` against `last_imported_at`. If `updated_at > last_imported_at`, the CRM row was edited by a human after the last import — preserve it (skip overwrite). Per-row dry-run report shows what was kept vs overwritten. Run with `--force` flag to override. |
| NocoDB row deleted between imports | 🟠 | Don't auto-delete CRM-side. Mark as `import_status='orphaned'` and surface in admin UI for manual resolution. |
| Numeric fields with units appended ("63ft") or units mismatched | 🟠 | Run all numeric extractions through `parseDecimalWithUnit()` that strips trailing units, normalizes to ft. NocoDB's metric formula columns ignored — recomputed from imperial. |
| `Length` decimal(2) in NocoDB vs unbounded `numeric` in CRM — re-imports trigger spurious "changed" detection on rounding | 🟡 | Normalize CRM-side to 2 decimals on import. Diff detection compares rounded values. |
| Concurrent import (two reps trigger simultaneously) | 🟡 | `pg_advisory_lock(BERTH_IMPORT_LOCK_KEY)` for the duration. Second invocation waits or errors immediately depending on flag. |
| NocoDB API pagination (defensive) | 🟢 | Loop with `pageSize=100` until `next` is null. |
| `Map Data` JSON has missing/unexpected keys | 🟢 | Validate against zod schema; on mismatch, log + skip the map_data update for that row, don't fail the whole import. |
| Status mismatch (NocoDB has 3 statuses, CRM has more nuanced flags) | 🟡 | Map NocoDB `Available`→`available`, `Under Offer`→`under_offer`, `Sold`→`sold`. CRM-side `status_override_mode` preserved if set. |
| Mooring normalization regex edge cases (multi-letter prefix, all-digit) | 🟡 | Regex `^([A-Z]+)-?0*(\d+)$` handles single-letter, multi-letter, hyphenated, leading-zero. Pure-numeric: explicit check + log if encountered. |
| Backfill SQL runs while NocoDB import also running | 🟡 | Sequential within the same migration script: normalize first, import second. |
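
The normalization regex from the rows above can be wrapped in a small helper; a sketch (function name illustrative — non-matching inputs such as pure-numeric moorings pass through unchanged so the import can log them):

```typescript
// Normalize a raw mooring number to the canonical ^[A-Z]+\d+$ form:
// uppercase, hyphen stripped, leading zeros removed (A-01 → A1, b12 → B12).
function normalizeMooring(raw: string): string {
  const match = raw.trim().toUpperCase().match(/^([A-Z]+)-?0*(\d+)$/);
  if (!match) return raw; // pure-numeric / unexpected shape: caller logs it
  const [, letters, digits] = match;
  return `${letters}${parseInt(digits, 10)}`;
}
```
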
### 14.2 Phase 1 — /clients + /interests column fix
| Case | Sev | Mitigation |
| ---------------------------------------------------------------------------- | --- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
| Multiple `is_primary=true` contacts on one client | 🟠 | Add unique partial index `WHERE is_primary=true` on `client_contacts(client_id)`. Backfill: keep most recently-updated as primary, demote others. |
| Phone has `value_country='SX'` but client is actually a US visitor | 🟡 | Phone country is a _proxy_. Display column as "Country" not "Nationality". |
| Backfill race: backfill SQL runs while user edits `nationality_iso` manually | 🟢 | `WHERE nationality_iso IS NULL` clause never overwrites manually-set values. Re-running safe. |
| Client has no phone (only email or nothing) | 🟢 | Country column "-". `nationality_iso` stays null. |
| Phone parses but `value_country` is null | 🟢 | Country column "-". |
| Yacht-by-clientId join: client owns multiple yachts | 🟡 | List view shows `latestYacht.name`, picked by `currentOwnerSince` desc. Detail view shows full list. |
| Desired dimensions on `interests` partially filled (length only) | 🟡 | Render as `"60ft × ? × ?"`. Recommender treats null dimensions as "no constraint" for that axis. |
| Residential interests have a different schema | 🟢 | Already separate routes; don't mix into the marina `/interests` list. |
### 14.3 Phase 2 — M:M schema refactor (`interest_berths`)
| Case | Sev | Mitigation |
| ---------------------------------------------------------------------------------------- | --- | ---------------------------------------------------------------------------------------------------------------------------- |
| Existing `interest.berth_id` points to deleted berth | 🔴 | Pre-flight check before migration: orphan rows logged + halted. User decides: skip or restore. |
| Migrating from NocoDB's `Interested Parties` M:M: same interest+berth pair appears twice | 🟡 | Dedup on `(interestId, berthId)` before INSERT. |
| Schema enforces "one primary per interest" but data migration writes more than one | 🟠 | Migration script asserts only one primary per interest before INSERT. If multiple candidates, pick by EOI status > recency. |
| Berth deleted while linked in `interest_berths` (`onDelete: 'restrict'`) | 🟠 | UI surfaces "Cannot archive — used in N active interests" with click-through to those interests. |
| Interest archived while linked to berths — junction rows persist | 🟢 | `onDelete: 'cascade'` from interest. Archive (`archived_at`) is soft; recommender filters out archived interests by default. |
| Two reps simultaneously edit `is_specific_interest` on the same row | 🟡 | Last-write-wins acceptable; surface socket event for realtime UI sync. |
| Toggle to `is_specific_interest=false` after EOI signed | 🟡 | Allowed (rep might want to mark as "not actively pitched anymore" even after EOI). Doesn't auto-clear `is_in_eoi_bundle`. |
| User tries to add a berth to interest that's already linked | 🟢 | UNIQUE on `(interest_id, berth_id)` → upsert: open the role-toggle dialog instead of inserting duplicate. |
### 14.4 Phase 4 — Recommender
| Case | Sev | Mitigation |
| -------------------------------------------------------------- | --- | ------------------------------------------------------------------------------------------------------------------------------------ |
| Interest has no desired dimensions | 🟠 | Panel shows "Set desired dimensions to see recommendations" with inline form to enter them. |
| Only some dimensions specified (length only) | 🟡 | Recommender treats unspecified as "no constraint." Banner: "Tighter recommendations available when you add Width and Draft." |
| All feasible berths exceed oversize cap | 🟡 | Empty result with helper text: "No berths within 30% buffer. Show all feasible? [button]" — clicking expands cap to ∞ for that view. |
| Desired width > every berth's width | 🟡 | Empty + "No berths can fit a yacht of this width." |
| Berth status changes during the recommender session | 🟡 | Refresh button + 60-sec auto-refresh. Cache key includes berth status. |
| Recommender SQL on a port with 1000+ berths | 🟡 | Single CTE chain with proper indexes. Benchmark target: <100ms p95 on 5000 berths + 500 interests. |
| Berth has interest from same client as the current one | 🟢 | Highlight: "Already linked to this client's interest from <date>". |
| Heat score from interests that closed _won_ (not fall-through) | 🟠 | Heat only fires when `outcome` is `lost_*` or `cancelled`. A won outcome means the berth is sold and hidden from the tier ladder. |
| Amenity filter where no berth satisfies | 🟢 | Empty + "Adjust filters" prompt. |
| Tier ladder hides D, but rep wants to see all | 🟢 | "Show all stages" toggle in panel header. Doesn't change per-port default. |
| Interest is residential type, panel still tries to render | 🟡 | Panel only mounts on `/interests`, not `/residential/interests`. Explicit guard. |
### 14.5 Phase 5 — EOI bundle + range formatter
| Case | Sev | Mitigation |
| ------------------------------------------------------ | --- | ------------------------------------------------------------------------------------------------------------------------------------ |
| Empty bundle | 🟢 | Returns `""`. EOI generation refuses to send if range is empty. |
| Single berth | 🟢 | `["A1"]` → `"A1"`, not `"A1-A1"`. Tested. |
| Skip numbers (A1, A2, A4) | 🟢 | `"A1-A2, A4"`. Tested. |
| Mixed letters in unsorted input | 🟢 | Sort by `(letter, number)` first. Tested. |
| Duplicate moorings in input | 🟢 | Dedup via `Set` before formatting. |
| Mooring with non-numeric suffix (e.g. `A1a`) | 🟡 | Regex `^([A-Z]+)(\d+)$` — non-conforming inputs pass through unchanged, joined by `, `. Logged warning. |
| Pure-numeric mooring (no letter prefix) | 🟡 | Same fallthrough. Edge case, documented. |
| Very long ranges (50+ consecutive) | 🟢 | "A1-A50" string fits. If exceeds Documenso PDF field max length, truncate with "+N more" and log. |
| Bundle generated for one rep while another modifies it | 🟡 | EOI generation reads the current bundle at trigger time; modifications after that go into the next EOI. Audit log captures snapshot. |
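
The cases in this table double as the spec for `formatBerthRange()`; a sketch implementing the sort, dedup, run-collapse, and non-conforming passthrough (the Documenso field-length truncation from the "Very long ranges" row is omitted):

```typescript
// Collapse a bundle of mooring numbers into a compact range string:
// ["A1","A2","A4"] → "A1-A2, A4". Non-conforming inputs (e.g. "A1a")
// pass through unchanged, appended after the ranges.
function formatBerthRange(moorings: string[]): string {
  const RE = /^([A-Z]+)(\d+)$/;
  const parsed: { letter: string; num: number }[] = [];
  const passthrough: string[] = [];
  for (const m of [...new Set(moorings)]) {
    const match = m.match(RE);
    if (match) parsed.push({ letter: match[1], num: parseInt(match[2], 10) });
    else passthrough.push(m);
  }
  parsed.sort((a, b) =>
    a.letter === b.letter ? a.num - b.num : a.letter < b.letter ? -1 : 1,
  );
  const parts: string[] = [];
  let i = 0;
  while (i < parsed.length) {
    let j = i;
    // extend the run while the letter matches and numbers are consecutive
    while (
      j + 1 < parsed.length &&
      parsed[j + 1].letter === parsed[i].letter &&
      parsed[j + 1].num === parsed[j].num + 1
    ) {
      j++;
    }
    const first = `${parsed[i].letter}${parsed[i].num}`;
    parts.push(i === j ? first : `${first}-${parsed[j].letter}${parsed[j].num}`);
    i = j + 1;
  }
  return [...parts, ...passthrough].join(", ");
}
```
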
### 14.6 Phase 6 — Per-berth PDF (storage + parser)
| Case | Sev | Mitigation |
| ---------------------------------------------------------------------------------------- | --- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Upload of 0-byte file | 🟠 | Server-side validation: reject `Content-Length: 0`. Pre-check before generating presigned URL. |
| Non-PDF disguised as PDF (wrong magic bytes) | 🔴 | Read first 5 bytes of upload, verify `%PDF-`. On mismatch, delete the MinIO object and reject. |
| Upload exceeds size limit | 🟠 | Presigned URL has `content-length-range` constraint. Failed upload: orphan cleaned by daily worker. |
| Two reps upload simultaneously to the same berth | 🟡 | Storage key uses `gen_random_uuid()` (no version-number conflict at storage layer); DB row decides which one is `current`. Wrap row insert in serializable transaction. |
| Parse fails entirely (corrupt PDF, OCR errors out) | 🟠 | Show as "Saved, parsing failed" with "Retry parse" button. PDF stored regardless; parse failure doesn't block save. |
| Parser extracts duplicate fields | 🟡 | Take first match by Y-coordinate (top of page), warn on duplicate. |
| Parser extracts numbers that are unit-ambiguous | 🟢 | Sample PDF pattern is `<imperial> / <metric>` — parse imperial; verify imperial × 0.3048 ≈ metric (within 1% tolerance). Mismatch → flag for review. |
| OCR confuses 0/O, 1/I/l | 🟡 | Numeric-only fields validated as numbers. Mooring number compared against the berth being uploaded to. |
| Mooring number in PDF doesn't match the berth being uploaded to | 🟠 | Warning dialog: "This PDF's berth number is `B5` but you're uploading to berth `A1`. Continue?" Force re-confirm. |
| Pricing valid-until date in the past at upload time | 🟢 | Accept; UI shows "Pricing data may be stale" chip. Don't block. |
| Imperial vs metric values disagree in PDF (designer typo) | 🟡 | Tolerance check (±1%); above that, log + use imperial as source of truth + warn. |
| Rollback to a version that was uploaded then archived | 🟢 | Versions kept indefinitely (no hard delete). Archive flag is "hide from history view"; rollback can target it. |
| Berth hard-deleted while versions exist | 🟡 | `onDelete: 'cascade'` from berth → versions, but MinIO blobs NOT auto-deleted (referenced from `document_sends`). Daily orphan-cleanup worker handles. |
| Parse extracts data with conflict; rep accepts; immediately reverts via "rollback to v3" | 🟡 | Rollback restores prior PDF as current but does NOT re-parse and re-update DB (separate "extract data from this version" action). Documented behavior. |
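
The magic-byte gate from the table above is tiny; a sketch (a real PDF starts with the five ASCII bytes `%PDF-` — anything else is rejected before the version row is created, and the stored object deleted upstream):

```typescript
// Check the first bytes of an upload against the PDF magic number.
const PDF_MAGIC = Buffer.from("%PDF-", "ascii");

function looksLikePdf(firstBytes: Buffer): boolean {
  return firstBytes.length >= 5 && firstBytes.subarray(0, 5).equals(PDF_MAGIC);
}
```
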
### 14.7 Phase 7 — Sales send-outs
| Case | Sev | Mitigation |
| ------------------------------------------------------------------ | --- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Recipient email contains typos | 🔴 | Pre-send confirmation modal with the exact recipient email shown. No quick-send for first-time recipients. |
| Body markdown contains unresolved merge fields | 🟠 | Pre-send dry-run renders the body and lists unresolved `{{tokens}}`. Send blocked until resolved. |
| Body markdown with HTML/script injection | 🔴 | Markdown rendering through DOMPurify before email assembly. Tests include XSS payloads in body. |
| PDF version active at send time gets rolled back later | 🟢 | `document_sends.berth_pdf_version_id` references the exact version row. Versions immutable. Audit always shows what was sent. |
| Brochure marked default but archived | 🟡 | Send UI filters by `is_default=true AND archived_at IS NULL`. Fallback to next-newest non-archived; UI flags "default brochure was archived; using <name>." |
| No brochure exists for the port | 🟢 | "Send brochure" button hidden/disabled. Tooltip: "Upload a brochure in admin first." |
| Per-user send rate limit exceeded | 🟡 | Defaults: 50 sends/user/hour for individual sends, 10 bulk-send operations/user/hour. Clear error: "Hit hourly limit, retry at HH:MM." Audit captures rejected attempts. |
| SMTP credentials wrong/expired | 🔴 | Send fails with clear error: "SMTP authentication failed. Admin: please update credentials in /admin/email." Failed `document_sends` row created with `failed_at` so rep sees it didn't go. |
| IMAP credentials missing | 🟢 | Bounce monitor disabled with banner: "IMAP not configured — bounce-rejection banners won't appear in document timelines." Sends still work. |
| Multiple primary emails on a client (defensive) | 🟢 | Picker shows all emails in dropdown; rep selects. No silent "use the first one." |
| Recipient unsubscribed | 🟡 | Out of scope for this branch. Future: `unsubscribed_at` column on clients; send UI checks and blocks. Documented in §10. |
| Email body length limits | 🟢 | Body Markdown 50KB max; UI shows char count. |
| Bulk send: one of N recipients fails | 🟠 | Each recipient = separate `document_sends` row + separate SMTP transaction. One failure doesn't block others. Per-recipient status visible in bulk-send result panel. |
| Two reps send the same berth PDF to the same client within seconds | 🟢 | Both succeed; both rows in `document_sends`. Not a duplicate concern. |
| Customizing body to remove the system-added link/attachment | 🟠 | Link/attachment added by system AFTER body merge — rep can't accidentally remove it. Custom body input is purely message text. |
### 14.8 Phase 3 — Website cutover
| Case | Sev | Mitigation |
| -------------------------------------------------------------------------- | --- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
| `CRM_PUBLIC_URL` points to wrong env (staging website hits prod CRM) | 🔴 | Website logs resolved URL on startup. CRM health-check returns env name (`prod`/`staging`/`dev`); website refuses to start if env mismatch. |
| CRM endpoint slow → website times out | 🟠 | Website's `$fetch` has 5s timeout. On timeout, falls back to in-memory cache from last successful fetch (5-min TTL). |
| CRM endpoint completely down | 🟡 | Same fallback. Website logs failure; uptime alerting fires. |
| Status mapping returns a value the public `Berth.Status` enum doesn't know | 🟡 | CRM endpoint validates return shape against public enum before responding. Unknown value → 500 + clear error log. |
| Concurrent reads on cache invalidation (thundering herd) | 🟡 | Next.js `revalidate` with `stale-while-revalidate` semantics. Spike handling via cache lock. |
| Website still has old NocoDB code path in deployed bundle | 🟢 | The cutover commit touches one file; a single deploy removes the old path. |
| Mooring number format drift between CRM and what the website URL expects | 🔴 | Phase 0 normalization runs before any cutover. Integration tests assert format matches. Critical because `/berths/A1` would 404 if CRM stores `A-01`. |
| Public endpoint exposes data that should be private | 🟠 | Output is deliberate allowlist of fields, not `SELECT *`. Status-mapping logic ensures `archived_at IS NOT NULL` berths are filtered out. |
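
The env-mismatch guard reduces to a pure comparison once the CRM health check reports its environment name (the endpoint shape is this plan's assumption; the fetch is omitted here):

```typescript
// Refuse to start when the website's own environment does not match the
// environment name reported by the CRM health check.
type EnvName = "prod" | "staging" | "dev";

function assertEnvMatch(websiteEnv: EnvName, crmReportedEnv: EnvName): void {
  if (websiteEnv !== crmReportedEnv) {
    throw new Error(
      `CRM_PUBLIC_URL points at a ${crmReportedEnv} CRM but this website is ${websiteEnv}; refusing to start`,
    );
  }
}
```
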
### 14.9 Async bounce monitor (Phase 7 admin config)
| Case | Sev | Mitigation |
| -------------------------------------------------------------------------- | --- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Bounce email parsed wrong, classified as size when actually content-policy | 🟡 | Conservative classifier: only flag as `bounce_classified_as='size'` when status code matches known list (`552 5.2.3`, `5.3.4`). Other rejections flagged generically. |
| Bounce email with no `Message-ID` references | 🟢 | Match by recipient + timestamp window. Fall back to "could not match" if both fail. |
| IMAP connection drops mid-fetch | 🟡 | Workers reconnect with backoff; idempotent processing on retry (each bounce email's `Message-ID` is dedup key). |
| Bounce arrives for a `document_sends` row that's been deleted | 🟢 | Match-failure logged; bounce-handler treats as no-op. |
| Forwarding chain (corp accepts → personal Gmail rejects) | 🟠 | Don't auto-retry. Surface rejection to rep. (Documented in §11.1.) |
| Out-of-office auto-replies wrongly classified as bounces | 🟢 | Classifier requires DSN content-type (`message/delivery-status`) — auto-replies don't match. |
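
The conservative size classifier from the first row can be sketched as a lookup over the enhanced status codes the plan lists (extracting the code from the DSN body is omitted):

```typescript
// Only the known size-rejection enhanced status codes (5.2.3 mailbox size,
// 5.3.4 message size) are classified as "size"; everything else is "other".
const SIZE_STATUS_CODES = new Set(["5.2.3", "5.3.4"]);

function classifyBounce(enhancedStatus: string | null): "size" | "other" {
  return enhancedStatus !== null && SIZE_STATUS_CODES.has(enhancedStatus)
    ? "size"
    : "other";
}
```
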
### 14.9a Phase 6a — Pluggable storage backend
| Case | Sev | Mitigation |
| ------------------------------------------------------------------------------------- | --- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Migration interrupted halfway (process crash, deploy, network) | 🟠 | Resumable: per-row `migrated_at` markers in a temp `_storage_migration_progress` table. Re-running picks up where it left off. Verify sha256 on each row before marking. |
| Disk fills up during migration into filesystem mode | 🟠 | Pre-flight check sums total bytes from source backend, compares against `df` available space + 20% margin. Aborts before starting if insufficient. |
| Filesystem mode on a multi-node deployment (each node has own disk) | 🔴 | Filesystem backend refuses to start when `process.env.MULTI_NODE_DEPLOYMENT=true`. Health check fails on second node when it can't read files written by first node. Admin docs spell out: filesystem = single-node only. |
| Symlinks / file permission issues in filesystem mode | 🟡 | Storage root is created with `0o700` permissions (owner-only). Symlink attacks blocked: realpath check that resolved path is within storage root. |
| Path traversal via filename in filesystem mode | 🔴 | Storage keys validated against `^[a-zA-Z0-9/_.-]+$` regex; reject anything containing `..` or absolute paths. Files always written to `path.join(STORAGE_ROOT, sanitizedKey)`. |
| Switch backend mid-upload (admin clicks "switch" while a rep is uploading a brochure) | 🟠 | Migration acquires advisory lock; new uploads during migration are queued (block on the lock) and complete after migration finishes. Upload UI shows "System maintenance in progress, please wait..." for the few seconds it takes. |
| Both backends temporarily reachable during migration; rep uploads to old backend | 🟡 | The factory always reads `system_settings.storage_backend` fresh on each request (cached briefly). After atomic-flip post-migration, the next request goes to the new backend. Race window is sub-second; if it does happen, the file's in the old backend; daily reconcile worker catches it. |
| Backend-specific URL formats leak to clients (S3 signed URL vs filesystem proxy URL) | 🟡 | Client code only sees the URL string; never inspects/manipulates it. URL format changes are transparent. |
| Downgrading from S3 → filesystem with files larger than disk-cache-friendly | 🟡 | No special handling needed for read; both backends stream. UI warns admin if total size > 50% of free disk during pre-flight. |
| sha256 mismatch on a migrated file (corruption during transfer) | 🟠 | Migration aborts on any sha256 mismatch; the row is logged and the migration overall fails. Admin investigates manually before retrying. |
| Backup story differs per backend | 🟢 | Admin UI shows a "Backup notes" section explaining: S3 mode = "Configure your S3 provider's lifecycle / replication policies"; filesystem mode = "Include the storage/ directory in your backup tool". |
| `STORAGE_FILESYSTEM_ROOT` not writable when first switching to filesystem | 🟠 | Pre-flight: write a sentinel file + delete it. If it fails, surface error in admin UI before starting migration. |
| Existing MinIO references in code use the literal string `s3_key` everywhere | 🟢 | Phase 6a renames columns to `storage_key`. No prod data, so safe to ALTER TABLE. All callers updated in same commit. |
| Filesystem backend served via Next.js — risk of Next.js processing the file | 🟡 | Use `NextResponse` with a `ReadableStream` body and explicit `Content-Type` + `Content-Disposition`. No Next.js image optimization, no edge runtime — Node runtime only for the storage proxy route. |
| Token-replay attack on filesystem proxy URLs | 🟠 | Tokens are HMAC-signed (key from `STORAGE_PROXY_HMAC_SECRET` env), include the storage key + expiry in payload, single-use enforced via short Redis cache of recently-seen tokens. |
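
The two-layer path defense (allowlist regex on the key, then a containment check on the resolved path) can be sketched as one helper; the symlink `realpath` step mentioned above is noted but omitted, and the root path is illustrative:

```typescript
import path from "node:path";

// Storage keys may only contain these characters.
const KEY_RE = /^[a-zA-Z0-9/_.-]+$/;

// Resolve a storage key under the storage root, rejecting traversal.
function resolveStorageKey(storageRoot: string, key: string): string {
  if (!KEY_RE.test(key) || key.includes("..") || path.isAbsolute(key)) {
    throw new Error(`invalid storage key: ${key}`);
  }
  const resolved = path.resolve(storageRoot, key);
  // Containment check: the resolved path must stay inside the storage root.
  // (A production version also realpath()s both sides to defeat symlinks.)
  if (!resolved.startsWith(path.resolve(storageRoot) + path.sep)) {
    throw new Error(`storage key escapes root: ${key}`);
  }
  return resolved;
}
```
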
### 14.10 Cross-cutting
| Case | Sev | Mitigation |
| -------------------------------------------------------------------------- | --- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Multi-port tenant isolation in recommender | 🔴 | Every recommender query joins through `interest.port_id` and filters berths to same port. Tested with port-A interest + port-B berths fixture. |
| Permission escalation: who configures SMTP credentials? | 🔴 | Per-port admin role only. Super_admin can do all ports. Reps cannot see encrypted secrets. |
| Audit log volume — every send + every berth edit logged | 🟡 | Existing `audit_logs` partitioned. Send-out events go to `document_sends` (separate, lighter). |
| Time zone for `pricing_valid_until` expiry | 🟡 | Stored as `date`. Expiry uses port's configured timezone (system_setting). UTC fallback. |
| Concurrent berth edit + send composer open: stale snapshot | 🟢 | `document_sends.berth_pdf_version_id` captured at send-click time, not composer-open. Edits between reflect in sent version. |
| Per-port system_setting collision (two admins set same key simultaneously) | 🟡 | `system_settings.updated_at` + optimistic concurrency. UI re-fetches on save error. |
| Encryption key rotation for `*_pass_encrypted` fields | 🟡 | Use existing `EMAIL_CREDENTIAL_KEY` env. Rotation = ops concern (re-encrypt with new key). Documented in CLAUDE.md as future ops runbook. |
| GDPR / right-to-be-forgotten: client deletion cascades to `document_sends` | 🟠 | **OPEN**: decide between (a) full delete, (b) anonymize PII keep metadata, (c) keep all (legal hold). Default lean: (b). Revisit when Phase 7 schema lands. |
| Data leak via diff dialog: PDF parser exposes data the rep shouldn't see | 🟠 | Upload UI confirms target berth + scopes parser to target's port. Cross-port mismatch → reject upload. |
### 14.11 Critical-item summary (🔴)
Eleven critical items, each requiring a test before its phase ships:
1. NocoDB mooring collisions → unique constraint + ON CONFLICT
2. Non-PDF disguised upload → magic-byte check
3. Recipient email typos → pre-send confirmation
4. XSS in body markdown → DOMPurify + XSS payload tests
5. SMTP credentials silently failing → loud error + failed `document_sends` row
6. Wrong-environment `CRM_PUBLIC_URL` → health-check env match
7. Mooring format drift breaking `/berths/A1` URLs → Phase 0 normalization gates Phase 3
8. Multi-port isolation in recommender → explicit `port_id` filter + cross-port test
9. Permission escalation on SMTP creds → per-port admin only, no rep visibility
10. Filesystem backend in multi-node deployment → refuse to start; documented + health-check enforced
11. Path traversal via storage key in filesystem mode → strict regex validation + path realpath check