pn-new-crm/src/lib/db/schema/berths.ts
Matt Ciaccio a3e002852b fix(audit-2): integration regressions + data-integrity from second-pass review
Two reviewer agents did a second-pass deep audit of the 21-commit
refactor. Eight findings; four fixed here (one was deferred with a
schema comment, three were 🟡 nice-to-haves left for follow-up).

Integration regressions (🟠 high):
- Outbound webhook `interest.berth_linked` now fires from the new
  junction-add handler. Was emitting a socket-only event, leaving
  external integrations silent post-refactor.
- Two new webhook events `interest.berth_unlinked` and
  `interest.berth_link_updated` added to WEBHOOK_EVENTS +
  INTERNAL_TO_WEBHOOK_MAP. PATCH and DELETE handlers now dispatch them
  alongside the existing socket emits — lifecycle parity restored.
- BerthInterestPulse adds useRealtimeInvalidation for berth-link
  events. The query key was berth-scoped while the linked-berths
  dialog invalidates interest-scoped keys (no prefix match), so the
  pulse went stale. Bridges via the realtime hook now.
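
TanStack Query's `invalidateQueries` matches cached entries by key *prefix*: a filter key matches only keys whose leading elements are equal. A minimal sketch of that matching rule, showing why the interest-scoped invalidation never reached the berth-scoped pulse key (the key shapes below are illustrative, not the app's real keys):

```typescript
type QueryKey = readonly unknown[];

// Simplified version of TanStack Query's default partial key matching:
// every element of the filter key must equal the corresponding element
// of the cached key (structural comparison).
function isPrefixMatch(filter: QueryKey, cached: QueryKey): boolean {
  return filter.every((part, i) => JSON.stringify(part) === JSON.stringify(cached[i]));
}

// Interest-scoped invalidation fired by the linked-berths dialog...
const dialogInvalidation: QueryKey = ['interests', 'int_1', 'berth-links'];

// ...shares no prefix with a berth-scoped pulse key, so the pulse cache
// entry is never invalidated and goes stale:
const pulseKey: QueryKey = ['berths', 'berth_9', 'interest-pulse'];

isPrefixMatch(dialogInvalidation, pulseKey); // false — pulse not refetched
isPrefixMatch(['berths', 'berth_9'], pulseKey); // true — berth-scoped invalidation works
```

Hence the fix: rather than relying on prefix overlap between the two key families, the pulse subscribes to berth-link realtime events and invalidates its own berth-scoped key directly.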

Recommender semantic fix (🟠 medium-high):
- aggregates CTE: active_interest_count now filters on
  `ib.is_specific_interest = true`, matching the public-map "Under
  Offer" derivation. EOI-bundle-only links no longer demote a berth
  to Tier C for other reps. Smoke test confirms previously-all-Tier-C
  results now correctly classify as Tier A.
- Same CTE: `total_interest_count` uses COUNT(ib.berth_id) instead of
  COUNT(*) so a berth with no junction rows reports 0 (not 1 from
  the LEFT JOIN's NULL-right-side row). Prevents heat over-counting.
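
Both CTE fixes hinge on how aggregation interacts with a LEFT JOIN. A small TypeScript simulation of the pitfall (row shapes and names are illustrative; the real logic is the SQL CTE):

```typescript
// Each element is one joined row for a single berth. A berth with no
// junction rows still yields exactly one row, with a NULL right side.
type LinkRow = { berthId: string; isSpecificInterest: boolean } | null;

function aggregate(rows: LinkRow[]) {
  // COUNT(*) counts every joined row, including the NULL-padded one:
  const countStar = rows.length;
  // COUNT(ib.berth_id) skips NULLs, so an unlinked berth reports 0:
  const totalInterestCount = rows.filter((r) => r !== null).length;
  // active_interest_count additionally filters on is_specific_interest = true,
  // so EOI-bundle-only links no longer count as active interest:
  const activeInterestCount = rows.filter((r) => r !== null && r.isSpecificInterest).length;
  return { countStar, totalInterestCount, activeInterestCount };
}

// Unlinked berth: the LEFT JOIN emits one all-NULL right side.
aggregate([null]); // { countStar: 1, totalInterestCount: 0, activeInterestCount: 0 }

// Berth linked only via EOI bundles (is_specific_interest = false):
aggregate([
  { berthId: 'b1', isSpecificInterest: false },
  { berthId: 'b1', isSpecificInterest: false },
]); // activeInterestCount: 0 — no longer demoted to Tier C
```

The `countStar` of 1 for the unlinked berth is exactly the heat over-counting the `COUNT(ib.berth_id)` change eliminates.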

Data integrity (🟠):
- AcroForm tier rejects negative numerics in coerceFieldValue (was
  letting through `length_ft="-50"` which would poison the
  recommender feasibility filter on apply).
- FilesystemBackend.resolveHmacSecret throws in production when
  storage_proxy_hmac_secret_encrypted is null. Dev still derives from
  BETTER_AUTH_SECRET for ergonomics; prod must explicitly configure.
- Documented the circular FK between berths.current_pdf_version_id
  and berth_pdf_versions.id. Drizzle's `.references()` can't express
  the cycle so the schema column is plain text + a comment; the FK
  is authoritatively maintained by migration 0030.
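
The shape of the negative-numeric guard, as an illustrative sketch (this is not the actual `coerceFieldValue` implementation, just the invariant it now enforces for dimension-like fields):

```typescript
// Coerce a raw AcroForm string field to a non-negative number, or null if
// it cannot be a valid berth dimension. Rejecting negatives here matters
// because a value like length_ft = "-50", once applied, would make the
// recommender's feasibility filter exclude every candidate berth.
function coerceNumeric(raw: string): number | null {
  const n = Number(raw.trim());
  if (!Number.isFinite(n) || n < 0) return null;
  return n;
}

coerceNumeric('50');  // 50
coerceNumeric('-50'); // null — rejected instead of applied
coerceNumeric('abc'); // null — NaN
```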

Tests still 1163/1163. tsc clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 04:20:38 +02:00


import {
  pgTable,
  text,
  boolean,
  integer,
  numeric,
  timestamp,
  date,
  jsonb,
  index,
  uniqueIndex,
  primaryKey,
} from 'drizzle-orm/pg-core';
import { ports } from './ports';
import { clients } from './clients';
export const berths = pgTable(
  'berths',
  {
    id: text('id')
      .primaryKey()
      .$defaultFn(() => crypto.randomUUID()),
    portId: text('port_id')
      .notNull()
      .references(() => ports.id),
    mooringNumber: text('mooring_number').notNull(),
    area: text('area'),
    status: text('status').notNull().default('available'), // available, under_offer, sold
    lengthFt: numeric('length_ft'),
    widthFt: numeric('width_ft'),
    draftFt: numeric('draft_ft'),
    lengthM: numeric('length_m'),
    widthM: numeric('width_m'),
    draftM: numeric('draft_m'),
    widthIsMinimum: boolean('width_is_minimum').default(false),
    // Numeric: ft (legacy NocoDB stored as plain numbers, no units in value).
    nominalBoatSize: numeric('nominal_boat_size'),
    nominalBoatSizeM: numeric('nominal_boat_size_m'),
    waterDepth: numeric('water_depth'),
    waterDepthM: numeric('water_depth_m'),
    waterDepthIsMinimum: boolean('water_depth_is_minimum').default(false),
    sidePontoon: text('side_pontoon'),
    powerCapacity: numeric('power_capacity'), // kW
    voltage: numeric('voltage'), // V at 60Hz
    mooringType: text('mooring_type'),
    cleatType: text('cleat_type'),
    cleatCapacity: text('cleat_capacity'),
    bollardType: text('bollard_type'),
    bollardCapacity: text('bollard_capacity'),
    access: text('access'),
    price: numeric('price'),
    priceCurrency: text('price_currency').notNull().default('USD'),
    // Lease/rental rates surfaced by the per-berth PDFs (Phase 6b). Null
    // until reps upload PDFs; rendered on the berth detail page with a
    // "Pricing data may be stale" chip when pricing_valid_until < today().
    weeklyRateHighUsd: numeric('weekly_rate_high_usd'),
    weeklyRateLowUsd: numeric('weekly_rate_low_usd'),
    dailyRateHighUsd: numeric('daily_rate_high_usd'),
    dailyRateLowUsd: numeric('daily_rate_low_usd'),
    pricingValidUntil: date('pricing_valid_until'),
    bowFacing: text('bow_facing'),
    berthApproved: boolean('berth_approved').default(false),
    // permanent, fixed_term, fee_simple, strata_lot (the last two map to
    // the Fee Simple / Strata Lot tenures shown in the per-berth PDFs).
    tenureType: text('tenure_type').notNull().default('permanent'),
    tenureYears: integer('tenure_years'),
    tenureStartDate: date('tenure_start_date'),
    tenureEndDate: date('tenure_end_date'),
    statusLastChangedBy: text('status_last_changed_by'), // user ID
    statusLastChangedReason: text('status_last_changed_reason'),
    statusLastModified: timestamp('status_last_modified', { withTimezone: true }),
    // Optional override flag carried over from NocoDB ("auto" or null in legacy data).
    // Reserved for future "manual override" semantics; not surfaced in the UI today.
    statusOverrideMode: text('status_override_mode'),
    // Set by scripts/import-berths-from-nocodb.ts. The import compares this
    // against updated_at to detect human edits made after the last import,
    // so re-running the import doesn't clobber CRM-side overrides.
    lastImportedAt: timestamp('last_imported_at', { withTimezone: true }),
    // Pointer to the active per-berth PDF version (Phase 6b). Null until a
    // rep uploads the first PDF; a later rollback can re-target this column
    // to any prior `berth_pdf_versions.id`. The full history lives in the
    // junction table — this column is just the "current" pointer.
    currentPdfVersionId: text('current_pdf_version_id'),
    createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
    updatedAt: timestamp('updated_at', { withTimezone: true }).notNull().defaultNow(),
  },
  (table) => [
    index('idx_berths_port').on(table.portId),
    index('idx_berths_status').on(table.portId, table.status),
    index('idx_berths_area').on(table.portId, table.area),
    uniqueIndex('idx_berths_mooring').on(table.portId, table.mooringNumber),
  ],
);
// Note: `berths.current_pdf_version_id` has an `ON DELETE SET NULL` FK to
// `berth_pdf_versions.id` installed by migration 0030. The column is left
// without a `.references()` / `foreignKey()` declaration in the Drizzle
// schema because the two tables form a circular FK (berth_pdf_versions →
// berths), and Drizzle's relation inference doesn't tolerate the cycle
// when both sides are declared via column-level `.references()`. The
// migration chain authoritatively maintains the constraint; a fresh
// `db:push` against an empty DB would skip the FK and require a follow-up
// generated migration to add it back. This is acceptable because we
// always apply migrations in order in dev/CI/prod.
export const berthMapData = pgTable(
  'berth_map_data',
  {
    id: text('id')
      .primaryKey()
      .$defaultFn(() => crypto.randomUUID()),
    berthId: text('berth_id')
      .notNull()
      .unique()
      .references(() => berths.id, { onDelete: 'cascade' }),
    svgPath: text('svg_path'),
    x: numeric('x'),
    y: numeric('y'),
    transform: text('transform'),
    fontSize: numeric('font_size'),
    extraData: jsonb('extra_data').default({}),
    updatedAt: timestamp('updated_at', { withTimezone: true }).notNull().defaultNow(),
  },
  (table) => [uniqueIndex('berth_map_data_berth_id_idx').on(table.berthId)],
);
export const berthRecommendations = pgTable(
  'berth_recommendations',
  {
    id: text('id')
      .primaryKey()
      .$defaultFn(() => crypto.randomUUID()),
    interestId: text('interest_id').notNull(), // references interests.id
    berthId: text('berth_id')
      .notNull()
      .references(() => berths.id, { onDelete: 'cascade' }),
    matchScore: numeric('match_score'), // 0-100
    matchReasons: jsonb('match_reasons'), // { "dimensional_fit": 95, "power_match": 80, ... }
    source: text('source').notNull().default('ai'), // ai, manual
    createdBy: text('created_by'), // user ID for manual recommendations
    createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
  },
  (table) => [
    uniqueIndex('berth_rec_interest_berth_idx').on(table.interestId, table.berthId),
    index('idx_br_interest').on(table.interestId),
  ],
);
export const berthWaitingList = pgTable(
  'berth_waiting_list',
  {
    id: text('id')
      .primaryKey()
      .$defaultFn(() => crypto.randomUUID()),
    berthId: text('berth_id')
      .notNull()
      .references(() => berths.id, { onDelete: 'cascade' }),
    clientId: text('client_id')
      .notNull()
      .references(() => clients.id, { onDelete: 'cascade' }),
    yachtId: text('yacht_id'), // FK added via relation; nullable (waiting for this yacht)
    position: integer('position').notNull(),
    priority: text('priority').notNull().default('normal'), // normal, high
    notifyPref: text('notify_pref').default('email'), // email, in_app, both
    notes: text('notes'),
    createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
  },
  (table) => [
    uniqueIndex('berth_waiting_list_berth_client_idx').on(table.berthId, table.clientId),
    index('idx_bwl_berth').on(table.berthId, table.position),
  ],
);
export const berthMaintenanceLog = pgTable(
  'berth_maintenance_log',
  {
    id: text('id')
      .primaryKey()
      .$defaultFn(() => crypto.randomUUID()),
    berthId: text('berth_id')
      .notNull()
      .references(() => berths.id, { onDelete: 'cascade' }),
    portId: text('port_id')
      .notNull()
      .references(() => ports.id),
    category: text('category').notNull(), // routine, repair, inspection, upgrade
    description: text('description').notNull(),
    cost: numeric('cost'),
    costCurrency: text('cost_currency').default('USD'),
    responsibleParty: text('responsible_party'),
    performedDate: date('performed_date').notNull(),
    photoFileIds: text('photo_file_ids').array(), // references to files table
    createdBy: text('created_by').notNull(),
    createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
    updatedAt: timestamp('updated_at', { withTimezone: true }).notNull().defaultNow(),
  },
  (table) => [index('idx_bml_berth').on(table.berthId), index('idx_bml_port').on(table.portId)],
);
/**
* Per-berth PDF version history (Phase 6b — see plan §3.3 / §4.7b).
*
* Each upload creates a new row with a monotonic `versionNumber` per berth.
* The active version is referenced by `berths.current_pdf_version_id`. The
* storage_key points at the file in the active `StorageBackend` (s3/filesystem),
* which is resolved at access time via `getStorageBackend()`.
*
* `parseResults` captures what the 3-tier reverse parser extracted at upload
* time plus any conflicts the rep resolved in the diff dialog. Kept as audit
* trail; rolling back to a prior version does NOT replay these (per §14.6).
*/
export const berthPdfVersions = pgTable(
  'berth_pdf_versions',
  {
    id: text('id')
      .primaryKey()
      .$defaultFn(() => crypto.randomUUID()),
    berthId: text('berth_id')
      .notNull()
      .references(() => berths.id, { onDelete: 'cascade' }),
    versionNumber: integer('version_number').notNull(),
    /** Object key in the active storage backend (renamed from `s3_key` per §4.7a). */
    storageKey: text('storage_key').notNull(),
    fileName: text('file_name').notNull(),
    fileSizeBytes: integer('file_size_bytes').notNull(),
    contentSha256: text('content_sha256').notNull(),
    uploadedBy: text('uploaded_by').notNull(),
    uploadedAt: timestamp('uploaded_at', { withTimezone: true }).notNull().defaultNow(),
    /** Cached signed-URL expiry per §11.1 — re-sign only when within 1h of expiry. */
    downloadUrlExpiresAt: timestamp('download_url_expires_at', { withTimezone: true }),
    /** { engine: 'acroform'|'ocr'|'ai', extracted: {...}, conflicts: [...], appliedFields: [...] } */
    parseResults: jsonb('parse_results'),
  },
  (table) => [
    uniqueIndex('berth_pdf_versions_berth_version_idx').on(table.berthId, table.versionNumber),
    index('idx_bpv_berth').on(table.berthId, table.uploadedAt),
  ],
);
export const berthTags = pgTable(
  'berth_tags',
  {
    berthId: text('berth_id')
      .notNull()
      .references(() => berths.id, { onDelete: 'cascade' }),
    tagId: text('tag_id').notNull(), // references tags.id
  },
  (table) => [primaryKey({ columns: [table.berthId, table.tagId] })],
);
export type Berth = typeof berths.$inferSelect;
export type NewBerth = typeof berths.$inferInsert;
export type BerthMapData = typeof berthMapData.$inferSelect;
export type NewBerthMapData = typeof berthMapData.$inferInsert;
export type BerthRecommendation = typeof berthRecommendations.$inferSelect;
export type NewBerthRecommendation = typeof berthRecommendations.$inferInsert;
export type BerthWaitingList = typeof berthWaitingList.$inferSelect;
export type NewBerthWaitingList = typeof berthWaitingList.$inferInsert;
export type BerthMaintenanceLog = typeof berthMaintenanceLog.$inferSelect;
export type NewBerthMaintenanceLog = typeof berthMaintenanceLog.$inferInsert;
export type BerthPdfVersion = typeof berthPdfVersions.$inferSelect;
export type NewBerthPdfVersion = typeof berthPdfVersions.$inferInsert;