fix(audit-verification): regressions found in post-Tier-6 review

Two parallel reviews of the Tier 0–6 work surfaced one CRITICAL
regression and a handful of remaining cross-tenant gaps that the
original audit didn't enumerate. All fixed here:

CRITICAL
* document-reminders.processReminderQueue — the new bulk-fetch
  leftJoin to documentTemplates was scoped on `templateType` alone.
  Templates of the same type exist in every port; the cartesian
  explosion would have fired one Documenso reminder PER matching
  template-row per cron tick (a 5-port deploy = 5 reminders to the
  same signer per cycle). Added eq(documentTemplates.portId, portId)
  to the join.
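  A stand-alone model of the blast radius (plain TS simulating the join; names and shapes are illustrative, not the actual drizzle query):

  ```typescript
  // Minimal model of the bug: a left join keyed on templateType alone
  // matches one template row PER PORT, multiplying the reminder rows.
  type Template = { portId: string; templateType: string };
  type Doc = { portId: string; documentType: string };

  function joinReminderRows(docs: Doc[], templates: Template[], scopeToPort: boolean): number {
    let rows = 0;
    for (const d of docs) {
      for (const t of templates) {
        if (t.templateType !== d.documentType) continue;
        if (scopeToPort && t.portId !== d.portId) continue; // the fix: eq(documentTemplates.portId, portId)
        rows++;
      }
    }
    return rows;
  }

  // Five ports each define an 'eoi' template; one document in port A.
  const templates = ['A', 'B', 'C', 'D', 'E'].map((p) => ({ portId: p, templateType: 'eoi' }));
  const docs = [{ portId: 'A', documentType: 'eoi' }];

  console.log(joinReminderRows(docs, templates, false)); // 5 — one reminder per port per tick
  console.log(joinReminderRows(docs, templates, true));  // 1
  ```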
* All five remaining Documenso webhook handlers (RecipientSigned /
  Completed / Opened / Rejected / Cancelled) now accept an optional
  portId, resolved through a shared resolveWebhookDocument() helper
  that refuses to mutate when the lookup is ambiguous across tenants
  without a resolved port. Tier 5's port-scoping was applied only to
  Expired; the route now forwards the matched portId to every
  handler. Also tightened the WHERE clauses on subsequent UPDATEs to
  (id, portId) for defense-in-depth.
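  The disambiguation contract, sketched with illustrative types (the real helper runs drizzle queries; this just models the decision table):

  ```typescript
  // Model of the ambiguity guard: a documensoId matching rows in more
  // than one port must NOT be mutated unless the caller resolved a port.
  type DocRow = { portId: string; documensoId: string };

  function resolveWebhookDoc(rows: DocRow[], documensoId: string, portId?: string): DocRow | null {
    const matches = rows.filter(
      (r) => r.documensoId === documensoId && (!portId || r.portId === portId),
    );
    if (matches.length === 0) return null; // unknown document: drop the event
    if (matches.length > 1) return null;   // ambiguous across tenants: drop the event
    return matches[0] ?? null;
  }

  const rows = [
    { portId: 'A', documensoId: 'doc-1' },
    { portId: 'B', documensoId: 'doc-1' }, // same Documenso id reused across ports
  ];
  console.log(resolveWebhookDoc(rows, 'doc-1'));      // null — ambiguous, refuse to mutate
  console.log(resolveWebhookDoc(rows, 'doc-1', 'B')); // the port-B row
  ```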

HIGH
* verifyDocumensoSecret now rejects when `expected` is empty —
  timingSafeEqual(0-bytes, 0-bytes) returns true, so a dev env with a
  blank DOCUMENSO_WEBHOOK_SECRET would accept any request whose
  X-Documenso-Secret header was also missing/empty.
  listDocumensoWebhookSecrets now skips the env entry when blank.
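  A repro of the zero-byte edge case plus the shape of the guarded comparison (function name illustrative; mirrors the fix, not copied from it):

  ```typescript
  import { timingSafeEqual } from 'node:crypto';

  // Why a blank secret must short-circuit: two zero-length buffers are
  // "equal" as far as timingSafeEqual is concerned.
  const blankMatchesBlank = timingSafeEqual(Buffer.alloc(0), Buffer.alloc(0));
  console.log(blankMatchesBlank); // true — hence the explicit guard

  function verifySecret(provided: string, expected: string): boolean {
    if (!provided || !expected) return false;              // reject blanks outright
    if (provided.length !== expected.length) return false; // timingSafeEqual throws on mismatch
    return timingSafeEqual(Buffer.from(provided), Buffer.from(expected));
  }

  console.log(verifySecret('', ''));             // false
  console.log(verifySecret('s3cret', 's3cret')); // true
  ```

  Note the length check is load-bearing twice over: timingSafeEqual throws on unequal-length inputs, so comparing unchecked would turn a wrong-length guess into an exception rather than a clean false.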
* /api/public/health — the website-intake-secret comparison was a
  string `===` (not constant-time). Switched to timingSafeEqual via
  Buffer.from().

MEDIUM
* server.ts SIGTERM ordering — Socket.io now closes BEFORE the HTTP
  drain, so lingering socket transports (websocket and long-poll)
  stop holding the server open past the compose stop_grace_period.
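  The ordering, modeled with stub handles (the real code closes the Socket.io server and the Node http.Server, both async; stubs here are synchronous to keep the sketch minimal):

  ```typescript
  // Model of the SIGTERM fix: close Socket.io BEFORE draining HTTP, so
  // long-lived socket transports can't pin the process past the grace period.
  const order: string[] = [];
  const io = { close: () => order.push('io') };          // stand-in for the Socket.io server
  const httpServer = { close: () => order.push('http') }; // stand-in for http.Server

  function onSigterm(): void {
    io.close();         // 1. drop websocket/long-poll transports first
    httpServer.close(); // 2. then let in-flight HTTP requests drain
  }

  onSigterm();
  console.log(order); // ['io', 'http']
  ```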
* /api/v1/me PATCH preferences merge — added an allow-list filter on
  the merged JSONB so legacy rows from the old .passthrough() era
  stop silently re-persisting their unknown keys to disk.
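  Shape of the allow-list merge (key names invented for illustration; the app's actual preference schema differs):

  ```typescript
  // Merge PATCH input over the stored row, then drop anything outside
  // the allow-list — legacy .passthrough()-era rows may carry arbitrary
  // extras that should not survive the write.
  const ALLOWED_KEYS = new Set(['theme', 'locale', 'notifications']);

  function mergePreferences(
    existing: Record<string, unknown>,
    patch: Record<string, unknown>,
  ): Record<string, unknown> {
    const merged = { ...existing, ...patch };
    return Object.fromEntries(
      Object.entries(merged).filter(([k]) => ALLOWED_KEYS.has(k)),
    );
  }

  const legacyRow = { theme: 'dark', legacyBloat: { huge: true } };
  console.log(mergePreferences(legacyRow, { locale: 'en' }));
  // { theme: 'dark', locale: 'en' } — legacyBloat is gone
  ```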

Migration fixes (deploy-blocking)
* 0041 referenced `port_role_overrides.permissions` (column is
  `permission_overrides`) — overrides are partial JSONB and don't
  need backfilling at all (deepMerge resolves `edit` from the base
  role). Removed the override UPDATEs entirely.
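  Why partial overrides resolve without a backfill, modeled as a tiny deepMerge (assumed semantics of the app's resolver, not its actual code):

  ```typescript
  // An override only stores the keys a port flipped; merge fills the
  // rest from the base role, so a missing 'edit' key is not a gap.
  type Perms = Record<string, Record<string, boolean>>;

  function deepMerge(base: Perms, override: Perms): Perms {
    const out: Perms = {};
    for (const group of new Set([...Object.keys(base), ...Object.keys(override)])) {
      out[group] = { ...base[group], ...override[group] };
    }
    return out;
  }

  const baseRole = { documents: { create: true, edit: true } };
  const portOverride = { documents: { create: false } }; // partial: no 'edit' key

  console.log(deepMerge(baseRole, portOverride));
  // { documents: { create: false, edit: true } } — edit resolved from base
  ```

  Backfilling `edit` into the override rows would have synthesized override entries the operator never set, which is exactly why 0041 now leaves `port_role_overrides` alone.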
* 0042 switched all FK + CHECK adds to NOT VALID + VALIDATE so the
  brief table-lock phase is decoupled from the row-scan validation,
  giving a cleaner abort-and-restart story if a constraint catches
  dirty production data. Added a pre-cleanup UPDATE for
  invoices.billing_entity_id = '' rows (backfills from clientName,
  falls back to the row id) so the new non-empty CHECK passes on a
  dirty table.
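  A TS model of the pre-cleanup expression's semantics — COALESCE(NULLIF(client_name, ''), id) — applied only to the dirty rows the WHERE clause selects:

  ```typescript
  // Row-level semantics of the 0042 backfill: a non-empty clientName
  // wins; otherwise the row id becomes a unique marker to audit later.
  function backfillBillingEntityId(row: {
    id: string;
    clientName: string;
    billingEntityId: string;
  }): string {
    if (row.billingEntityId !== '') return row.billingEntityId; // WHERE billing_entity_id = ''
    return row.clientName !== '' ? row.clientName : row.id;     // COALESCE(NULLIF(client_name, ''), id)
  }

  console.log(backfillBillingEntityId({ id: 'inv_1', clientName: 'Acme', billingEntityId: '' })); // 'Acme'
  console.log(backfillBillingEntityId({ id: 'inv_2', clientName: '', billingEntityId: '' }));     // 'inv_2'
  ```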

Test status: 1175/1175 vitest, tsc clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Matt Ciaccio
2026-05-05 21:19:39 +02:00
parent 83239104e0
commit 63c4073e64
10 changed files with 218 additions and 150 deletions

View File

@@ -13,6 +13,13 @@
-- jsonb_set with create_missing=true (the default) inserts the key only
-- when it's absent, so re-runs are idempotent and the migration is safe
-- against a partial run.
--
-- Note: per-port overrides live in `port_role_overrides.permission_overrides`
-- and are PARTIAL — they only contain the keys a port flipped from the
-- base role. The deepMerge resolver fills in `documents.edit` from the
-- base role for any port that didn't override it, so we deliberately do
-- NOT touch `port_role_overrides` here. Backfilling there would synthesize
-- override entries that the operator never intended.
UPDATE roles
SET permissions = jsonb_set(
@@ -33,26 +40,3 @@ SET permissions = jsonb_set(
)
WHERE permissions->'files' IS NOT NULL
AND NOT (permissions->'files' ? 'edit');
-- Same backfill on per-port overrides (`port_role_overrides.permissions`)
-- so an override that flipped a sibling permission stays consistent.
UPDATE port_role_overrides
SET permissions = jsonb_set(
permissions,
'{documents,edit}',
COALESCE(permissions->'documents'->'create', 'false'::jsonb),
true
)
WHERE permissions->'documents' IS NOT NULL
AND NOT (permissions->'documents' ? 'edit');
UPDATE port_role_overrides
SET permissions = jsonb_set(
permissions,
'{files,edit}',
COALESCE(permissions->'files'->'upload', 'false'::jsonb),
true
)
WHERE permissions->'files' IS NOT NULL
AND NOT (permissions->'files' ? 'edit');

View File

@@ -4,8 +4,11 @@
-- a service that writes interest_id='nonexistent' faces no DB rejection
-- and downstream null-tolerant joins silently misbehave.
--
-- All adds are NOT VALID-friendly (we use IF NOT EXISTS via DO blocks);
-- the migration is idempotent so re-running it is safe.
-- All adds are idempotent (DO blocks swallow duplicate_object) and use
-- the NOT VALID + VALIDATE pattern so the brief table-lock phase is
-- decoupled from the slow row-scan validation. If validation fails for
-- a constraint the migration aborts before later constraints land — a
-- prod operator can clean the dirty row(s) and re-run.
--
-- Cascade rule:
-- nullable column → ON DELETE SET NULL (orphan tolerance)
@@ -14,77 +17,106 @@
--
-- Refs: docs/audit-comprehensive-2026-05-05.md HIGH §10 (auditor-C3 Issue 1).
-- documents
-- ─── Pre-cleanup: invoices.billing_entity_id non-empty ──────────────────────
-- The previous schema declared notNull().default(''), so historical rows
-- may carry empty strings. The CHECK constraint added below would reject
-- the migration on a dirty table. Backfill from clientName when present
-- so the resolver downstream can still find a usable handle, otherwise
-- fall back to the row id (which gives the operator a unique marker to
-- audit later). This UPDATE is a no-op on a clean DB.
UPDATE invoices
SET billing_entity_id = COALESCE(NULLIF(client_name, ''), id)
WHERE billing_entity_id = '';
-- ─── documents ─────────────────────────────────────────────────────────────
DO $$ BEGIN
ALTER TABLE documents
ADD CONSTRAINT documents_interest_id_fkey
FOREIGN KEY (interest_id) REFERENCES interests(id) ON DELETE SET NULL;
FOREIGN KEY (interest_id) REFERENCES interests(id) ON DELETE SET NULL NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE documents VALIDATE CONSTRAINT documents_interest_id_fkey;
DO $$ BEGIN
ALTER TABLE documents
ADD CONSTRAINT documents_yacht_id_fkey
FOREIGN KEY (yacht_id) REFERENCES yachts(id) ON DELETE SET NULL;
FOREIGN KEY (yacht_id) REFERENCES yachts(id) ON DELETE SET NULL NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE documents VALIDATE CONSTRAINT documents_yacht_id_fkey;
DO $$ BEGIN
ALTER TABLE documents
ADD CONSTRAINT documents_company_id_fkey
FOREIGN KEY (company_id) REFERENCES companies(id) ON DELETE SET NULL;
FOREIGN KEY (company_id) REFERENCES companies(id) ON DELETE SET NULL NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE documents VALIDATE CONSTRAINT documents_company_id_fkey;
DO $$ BEGIN
ALTER TABLE documents
ADD CONSTRAINT documents_reservation_id_fkey
FOREIGN KEY (reservation_id) REFERENCES berth_reservations(id) ON DELETE SET NULL;
FOREIGN KEY (reservation_id) REFERENCES berth_reservations(id) ON DELETE SET NULL NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE documents VALIDATE CONSTRAINT documents_reservation_id_fkey;
-- ─── files ─────────────────────────────────────────────────────────────────
-- files
DO $$ BEGIN
ALTER TABLE files
ADD CONSTRAINT files_yacht_id_fkey
FOREIGN KEY (yacht_id) REFERENCES yachts(id) ON DELETE SET NULL;
FOREIGN KEY (yacht_id) REFERENCES yachts(id) ON DELETE SET NULL NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE files VALIDATE CONSTRAINT files_yacht_id_fkey;
DO $$ BEGIN
ALTER TABLE files
ADD CONSTRAINT files_company_id_fkey
FOREIGN KEY (company_id) REFERENCES companies(id) ON DELETE SET NULL;
FOREIGN KEY (company_id) REFERENCES companies(id) ON DELETE SET NULL NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE files VALIDATE CONSTRAINT files_company_id_fkey;
-- ─── interests ─────────────────────────────────────────────────────────────
-- interests (yacht_id is wired via relations only)
DO $$ BEGIN
ALTER TABLE interests
ADD CONSTRAINT interests_yacht_id_fkey
FOREIGN KEY (yacht_id) REFERENCES yachts(id) ON DELETE SET NULL;
FOREIGN KEY (yacht_id) REFERENCES yachts(id) ON DELETE SET NULL NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE interests VALIDATE CONSTRAINT interests_yacht_id_fkey;
-- ─── reminders ─────────────────────────────────────────────────────────────
-- reminders
DO $$ BEGIN
ALTER TABLE reminders
ADD CONSTRAINT reminders_interest_id_fkey
FOREIGN KEY (interest_id) REFERENCES interests(id) ON DELETE SET NULL;
FOREIGN KEY (interest_id) REFERENCES interests(id) ON DELETE SET NULL NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE reminders VALIDATE CONSTRAINT reminders_interest_id_fkey;
DO $$ BEGIN
ALTER TABLE reminders
ADD CONSTRAINT reminders_berth_id_fkey
FOREIGN KEY (berth_id) REFERENCES berths(id) ON DELETE SET NULL;
FOREIGN KEY (berth_id) REFERENCES berths(id) ON DELETE SET NULL NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE reminders VALIDATE CONSTRAINT reminders_berth_id_fkey;
-- ─── berth_waiting_list ────────────────────────────────────────────────────
-- berth_waiting_list
DO $$ BEGIN
ALTER TABLE berth_waiting_list
ADD CONSTRAINT berth_waiting_list_yacht_id_fkey
FOREIGN KEY (yacht_id) REFERENCES yachts(id) ON DELETE SET NULL;
FOREIGN KEY (yacht_id) REFERENCES yachts(id) ON DELETE SET NULL NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE berth_waiting_list VALIDATE CONSTRAINT berth_waiting_list_yacht_id_fkey;
-- ─── form_submissions ──────────────────────────────────────────────────────
-- form_submissions
DO $$ BEGIN
ALTER TABLE form_submissions
ADD CONSTRAINT form_submissions_interest_id_fkey
FOREIGN KEY (interest_id) REFERENCES interests(id) ON DELETE SET NULL;
FOREIGN KEY (interest_id) REFERENCES interests(id) ON DELETE SET NULL NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE form_submissions VALIDATE CONSTRAINT form_submissions_interest_id_fkey;
-- ─── Polymorphic CHECK round 2 ──────────────────────────────────────────────
-- 0036 covered yachts.current_owner_type and invoices.billing_entity_type.
@@ -93,26 +125,27 @@ EXCEPTION WHEN duplicate_object THEN NULL; END $$;
DO $$ BEGIN
ALTER TABLE yacht_ownership_history
ADD CONSTRAINT yacht_ownership_history_owner_type_chk
CHECK (owner_type IN ('client', 'company'));
CHECK (owner_type IN ('client', 'company')) NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE yacht_ownership_history VALIDATE CONSTRAINT yacht_ownership_history_owner_type_chk;
DO $$ BEGIN
ALTER TABLE document_sends
ADD CONSTRAINT document_sends_document_kind_chk
CHECK (document_kind IN ('berth_pdf', 'brochure'));
CHECK (document_kind IN ('berth_pdf', 'brochure')) NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE document_sends VALIDATE CONSTRAINT document_sends_document_kind_chk;
-- ─── invoices.billing_entity_id sanity check ───────────────────────────────
-- The schema declared notNull() with default('') which combined with the
-- 0036 type CHECK lets a row insert with billing_entity_type='client' and
-- billing_entity_id='' — the polymorphic resolver looks up the empty
-- string and returns null with no DB-level signal.
-- The pre-cleanup at the top of this migration backfilled empty strings
-- to clientName (or row id), so the VALIDATE step is now safe.
DO $$ BEGIN
ALTER TABLE invoices
ADD CONSTRAINT invoices_billing_entity_id_nonempty_chk
CHECK (billing_entity_id <> '');
CHECK (billing_entity_id <> '') NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE invoices VALIDATE CONSTRAINT invoices_billing_entity_id_nonempty_chk;
-- ─── clients.merged_into_client_id self-FK ─────────────────────────────────
-- Already nullable; populated when a client is soft-merged into another.
@@ -120,5 +153,6 @@ EXCEPTION WHEN duplicate_object THEN NULL; END $$;
DO $$ BEGIN
ALTER TABLE clients
ADD CONSTRAINT clients_merged_into_client_id_fkey
FOREIGN KEY (merged_into_client_id) REFERENCES clients(id) ON DELETE SET NULL;
FOREIGN KEY (merged_into_client_id) REFERENCES clients(id) ON DELETE SET NULL NOT VALID;
EXCEPTION WHEN duplicate_object THEN NULL; END $$;
ALTER TABLE clients VALIDATE CONSTRAINT clients_merged_into_client_id_fkey;

View File

@@ -3,8 +3,16 @@ import { timingSafeEqual } from 'crypto';
// Documenso (v1.13 + 2.x) authenticates outbound webhooks by sending the
// configured secret in plaintext via the `X-Documenso-Secret` header.
// There is no HMAC. Compare the provided value timing-safely to the env secret.
//
// An empty `expected` MUST always reject — without this guard,
// timingSafeEqual(0-bytes, 0-bytes) returns true, so a dev environment
// with a blank DOCUMENSO_WEBHOOK_SECRET would accept any request whose
// `X-Documenso-Secret` was also empty/missing. Same for blank per-port
// secret rows in `system_settings` (the per-port writer should never
// store an empty string but defense-in-depth here is cheap).
export function verifyDocumensoSecret(provided: string, expected: string): boolean {
if (!provided || provided.length !== expected.length) return false;
if (!provided || !expected) return false;
if (provided.length !== expected.length) return false;
try {
return timingSafeEqual(Buffer.from(provided), Buffer.from(expected));
} catch {

View File

@@ -206,7 +206,20 @@ export async function processReminderQueue(portId: string): Promise<void> {
fileId: documents.fileId,
})
.from(documents)
.leftJoin(documentTemplates, eq(documentTemplates.templateType, documents.documentType))
// CRITICAL: scope the join to the same port — `documentTemplates.templateType`
// is not unique across ports, so a leftJoin without `portId` produces a
// cartesian explosion (one output row per template-type match across
// every port). The downstream loop fires `documensoRemind` per row,
// which means the same signer in port A would receive N reminders on
// a single cron tick (once per port that defined a template of the
// same type). Audit follow-up after Tier 3 ship.
.leftJoin(
documentTemplates,
and(
eq(documentTemplates.templateType, documents.documentType),
eq(documentTemplates.portId, portId),
),
)
.where(
and(
eq(documents.portId, portId),

View File

@@ -780,18 +780,59 @@ export async function listDocumentEvents(documentId: string, portId: string) {
// ─── Webhook Handlers ─────────────────────────────────────────────────────────
/**
* Shared port-scoped lookup for inbound Documenso webhooks. Two ports
* sharing a Documenso instance — or migrating between instances with
* documentId reuse — would otherwise let `findFirst` return whichever
* row sorts first across tenants. When the route resolves a portId from
* the matched per-port webhook secret it threads it here; otherwise we
* fall back to a port-agnostic `findMany` and refuse to mutate when the
* lookup is ambiguous (mirrors the guard in handleDocumentExpired).
*
* Returns null when no document matches OR when the lookup is ambiguous
* across multiple ports without a resolved portId. Callers must treat
* null as "drop the event" (the cron sweep / next webhook will catch up
* once the data is consistent).
*/
async function resolveWebhookDocument(
documensoId: string,
portId: string | undefined,
): Promise<typeof documents.$inferSelect | null> {
if (portId) {
const doc = await db.query.documents.findFirst({
where: and(eq(documents.documensoId, documensoId), eq(documents.portId, portId)),
});
if (!doc) {
logger.warn({ documensoId, portId }, 'Document not found for webhook (port-scoped)');
return null;
}
return doc;
}
const matches = await db.query.documents.findMany({
where: eq(documents.documensoId, documensoId),
});
if (matches.length === 0) {
logger.warn({ documensoId }, 'Document not found for webhook');
return null;
}
if (matches.length > 1) {
logger.error(
{ documensoId, matchCount: matches.length, ports: matches.map((m) => m.portId) },
'Documenso webhook ambiguous across multiple ports — refusing to mutate',
);
return null;
}
return matches[0]!;
}
export async function handleRecipientSigned(eventData: {
documentId: string;
recipientEmail: string;
signatureHash?: string;
portId?: string;
}) {
const doc = await db.query.documents.findFirst({
where: eq(documents.documensoId, eventData.documentId),
});
if (!doc) {
logger.warn({ documensoId: eventData.documentId }, 'Document not found for webhook');
return;
}
const doc = await resolveWebhookDocument(eventData.documentId, eventData.portId);
if (!doc) return;
// Update signer status
const [signer] = await db
@@ -826,7 +867,7 @@ export async function handleRecipientSigned(eventData: {
await db
.update(documents)
.set({ status: 'partially_signed', updatedAt: new Date() })
.where(eq(documents.id, doc.id));
.where(and(eq(documents.id, doc.id), eq(documents.portId, doc.portId)));
}
await db.insert(documentEvents).values({
@@ -843,14 +884,9 @@ export async function handleRecipientSigned(eventData: {
});
}
export async function handleDocumentCompleted(eventData: { documentId: string }) {
const doc = await db.query.documents.findFirst({
where: eq(documents.documensoId, eventData.documentId),
});
if (!doc) {
logger.warn({ documensoId: eventData.documentId }, 'Document not found for webhook');
return;
}
export async function handleDocumentCompleted(eventData: { documentId: string; portId?: string }) {
const doc = await resolveWebhookDocument(eventData.documentId, eventData.portId);
if (!doc) return;
// BR-022: Download signed PDF and store in MinIO
const port = await db.query.ports.findFirst({ where: eq(ports.id, doc.portId) });
@@ -980,38 +1016,8 @@ export async function handleDocumentCompleted(eventData: { documentId: string })
}
export async function handleDocumentExpired(eventData: { documentId: string; portId?: string }) {
// Port-scoped lookup when the caller resolved a portId from the
// webhook signature. Two ports holding the same Documenso instance
// (or migrating between instances with id reuse) would otherwise
// share a documensoId across tenants, and findFirst would return
// whichever row sorted first — flipping a foreign-port document.
const matches = await db.query.documents.findMany({
where: eventData.portId
? and(eq(documents.documensoId, eventData.documentId), eq(documents.portId, eventData.portId))
: eq(documents.documensoId, eventData.documentId),
});
if (matches.length === 0) {
logger.warn({ documensoId: eventData.documentId }, 'Document not found for webhook');
return;
}
if (matches.length > 1 && !eventData.portId) {
// Cross-tenant ambiguity. Refuse to mutate without a resolved port —
// safer to drop the event (the cron expiry sweep will catch up) than
// flip the wrong document.
logger.error(
{
documensoId: eventData.documentId,
matchCount: matches.length,
ports: matches.map((m) => m.portId),
},
'Document expired webhook ambiguous across multiple ports — refusing to mutate',
);
return;
}
const doc = matches[0]!;
const doc = await resolveWebhookDocument(eventData.documentId, eventData.portId);
if (!doc) return;
await db
.update(documents)
@@ -1038,14 +1044,10 @@ export async function handleDocumentOpened(eventData: {
documentId: string;
recipientEmail: string;
signatureHash?: string;
portId?: string;
}) {
const doc = await db.query.documents.findFirst({
where: eq(documents.documensoId, eventData.documentId),
});
if (!doc) {
logger.warn({ documensoId: eventData.documentId }, 'Document not found for webhook');
return;
}
const doc = await resolveWebhookDocument(eventData.documentId, eventData.portId);
if (!doc) return;
const [signer] = await db
.select()
@@ -1075,14 +1077,10 @@ export async function handleDocumentRejected(eventData: {
documentId: string;
recipientEmail?: string;
signatureHash?: string;
portId?: string;
}) {
const doc = await db.query.documents.findFirst({
where: eq(documents.documensoId, eventData.documentId),
});
if (!doc) {
logger.warn({ documensoId: eventData.documentId }, 'Document not found for webhook');
return;
}
const doc = await resolveWebhookDocument(eventData.documentId, eventData.portId);
if (!doc) return;
let signerId: string | null = null;
if (eventData.recipientEmail) {
@@ -1102,13 +1100,13 @@ export async function handleDocumentRejected(eventData: {
await db
.update(documents)
.set({ status: 'rejected', updatedAt: new Date() })
.where(eq(documents.id, doc.id));
.where(and(eq(documents.id, doc.id), eq(documents.portId, doc.portId)));
if (doc.interestId && doc.documentType === 'eoi') {
await db
.update(interests)
.set({ eoiStatus: 'rejected', updatedAt: new Date() })
.where(eq(interests.id, doc.interestId));
.where(and(eq(interests.id, doc.interestId), eq(interests.portId, doc.portId)));
}
await db.insert(documentEvents).values({
@@ -1128,25 +1126,21 @@ export async function handleDocumentRejected(eventData: {
export async function handleDocumentCancelled(eventData: {
documentId: string;
signatureHash?: string;
portId?: string;
}) {
const doc = await db.query.documents.findFirst({
where: eq(documents.documensoId, eventData.documentId),
});
if (!doc) {
logger.warn({ documensoId: eventData.documentId }, 'Document not found for webhook');
return;
}
const doc = await resolveWebhookDocument(eventData.documentId, eventData.portId);
if (!doc) return;
await db
.update(documents)
.set({ status: 'cancelled', updatedAt: new Date() })
.where(eq(documents.id, doc.id));
.where(and(eq(documents.id, doc.id), eq(documents.portId, doc.portId)));
if (doc.interestId && doc.documentType === 'eoi') {
await db
.update(interests)
.set({ eoiStatus: 'cancelled', updatedAt: new Date() })
.where(eq(interests.id, doc.interestId));
.where(and(eq(interests.id, doc.interestId), eq(interests.portId, doc.portId)));
}
await db.insert(documentEvents).values({

View File

@@ -225,9 +225,13 @@ export async function listDocumensoWebhookSecrets(): Promise<DocumensoSecretEntr
if (typeof row.value !== 'string' || !row.value || !row.portId) continue;
out.push({ portId: row.portId, secret: row.value });
}
// Always include the global env secret as a fallback (null portId means
// "no per-port resolution" — preserves single-tenant compatibility).
out.push({ portId: null, secret: env.DOCUMENSO_WEBHOOK_SECRET });
// Append the global env secret as a fallback ONLY when it's a real,
// non-empty value. An empty env secret would otherwise match an empty
// X-Documenso-Secret header (verifyDocumensoSecret guards this too,
// but skipping the entry here keeps the matched-secret loop honest).
if (env.DOCUMENSO_WEBHOOK_SECRET) {
out.push({ portId: null, secret: env.DOCUMENSO_WEBHOOK_SECRET });
}
return out;
}