feat(errors): platform-wide request ids + error codes + admin inspector

End-to-end error-handling overhaul. A user hitting any failure now sees
a plain-text message + stable error code + reference id. A super admin
can paste the id into /admin/errors/<id> for the full request shape,
sanitized body, error stack, and a heuristic likely-cause hint.

REQUEST CONTEXT (AsyncLocalStorage)
- src/lib/request-context.ts mints a per-request frame carrying
  requestId + portId + userId + method + path + start timestamp.
- withAuth wraps every authenticated handler in runWithRequestContext
  and accepts an upstream X-Request-Id header (validated shape) or
  generates a fresh UUID. The id ALWAYS leaves on the X-Request-Id
  response header, including early-return 401/403/4xx paths.
- Pino logger reads from the same context via mixin — every log
  line emitted during the request automatically carries the ids
  with no per-call threading.
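A minimal sketch of the pattern described above, assuming Node's AsyncLocalStorage; field and helper names mirror the commit message, not necessarily the real src/lib/request-context.ts:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';
import { randomUUID } from 'node:crypto';

export interface RequestContext {
  requestId: string;
  portId?: string;
  userId?: string;
  method: string;
  path: string;
  startedAt: number;
}

const storage = new AsyncLocalStorage<RequestContext>();

export function getRequestContext(): RequestContext | undefined {
  return storage.getStore();
}

const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

/** Accept a well-formed upstream X-Request-Id, else mint a fresh UUID. */
export function resolveRequestId(upstream: string | null): string {
  return upstream && UUID_RE.test(upstream) ? upstream : randomUUID();
}

/** Everything called inside `fn` (including awaited code) sees `ctx`. */
export function runWithRequestContext<T>(ctx: RequestContext, fn: () => T): T {
  return storage.run(ctx, fn);
}
```

A pino `mixin` can then call `getRequestContext()` on every log call and merge the ids into the line, which is what makes the "no per-call threading" claim work.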

ERROR CODE REGISTRY
- src/lib/error-codes.ts defines stable DOMAIN_REASON codes with
  HTTP status + plain-text user-facing message (no jargon, written
  for the rep on the phone with a customer).
- New CodedError class wraps a registered code + optional
  internalMessage (admin-only — never sent to client).
- Existing AppError subclasses got plain-text default rewrites so
  legacy throw sites improve immediately without migration.
- High-impact services migrated to specific codes:
  expenses (RECEIPT_REQUIRED, INVOICE_LINKED), interest-berths
  (CROSS_PORT_LINK_REJECTED), berth-pdf (PDF_MAGIC_BYTE / PDF_EMPTY /
  PDF_TOO_LARGE / VERSION_ALREADY_CURRENT), recommender
  (INTEREST_PORT_MISMATCH).
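A hedged sketch of the registry + CodedError shape; the entries, statuses, and wording here are illustrative stand-ins, not the real src/lib/error-codes.ts content:

```typescript
interface RegisteredCode {
  status: number;
  message: string; // plain-text, user-facing
}

// Illustrative entries only; the real registry is larger.
const ERROR_CODES = {
  EXPENSES_RECEIPT_REQUIRED: {
    status: 409,
    message: 'Attach a receipt file or confirm the no-receipt warning first.',
  },
  BERTHS_PDF_TOO_LARGE: {
    status: 422,
    message: 'That PDF is too large to upload.',
  },
} satisfies Record<string, RegisteredCode>;

type ErrorCode = keyof typeof ERROR_CODES;

class CodedError extends Error {
  readonly code: ErrorCode;
  readonly status: number;
  /** Admin-only detail; never serialized to the client. */
  readonly internalMessage?: string;

  constructor(code: ErrorCode, opts?: { internalMessage?: string }) {
    super(ERROR_CODES[code].message); // user-facing text comes from the registry
    this.name = 'CodedError';
    this.code = code;
    this.status = ERROR_CODES[code].status;
    this.internalMessage = opts?.internalMessage;
  }
}
```

Keying the constructor on `keyof typeof ERROR_CODES` is what makes unregistered codes a compile error rather than a runtime surprise.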

ERROR ENVELOPE
- errorResponse always sets X-Request-Id header + requestId field.
- 5xx responses include a friendly "Quote error ID …" line so users
  know what to give support.
- 4xx kept clean (validation, permission, not-found don't pollute
  the inspector — they're already in audit log).
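The wire shape this implies, as a sketch (field names are assumptions drawn from the client-side section below, not the real errorResponse implementation):

```typescript
interface ErrorEnvelope {
  message: string;   // plain-text, safe to show the user
  code?: string;     // registered DOMAIN_REASON code, when available
  requestId: string; // always present; mirrored on the X-Request-Id header
  details?: unknown; // e.g. validation issues on 4xx
  retryAfter?: number;
}

function buildEnvelope(
  status: number,
  message: string,
  requestId: string,
  code?: string,
): ErrorEnvelope {
  const body: ErrorEnvelope = { message, requestId, code };
  // 5xx only: append the reference line so the user quotes the id to support.
  if (status >= 500) {
    body.message = `${message} Quote error ID ${requestId} when contacting support.`;
  }
  return body;
}
```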

PERSISTENCE (error_events table, migration 0040)
- One row per 5xx, keyed on requestId, with method/path/status/error
  name+message/stack head (4KB cap)/sanitized body excerpt (1KB cap;
  password/token/secret/etc keys redacted)/duration/IP/UA/metadata.
- captureErrorEvent extracts Postgres SQLSTATE/severity/cause.code
  so the classifier can recognize FK / unique / NOT NULL / schema-
  drift violations.
- Failure to persist is logged-not-thrown.

LIKELY-CULPRIT CLASSIFIER (src/lib/error-classifier.ts)
- 4-pass heuristic (first match wins):
  1. Postgres SQLSTATE → human reason (23503 FK, 23505 unique,
     42703 schema drift, 53300 connection limit, …)
  2. Error class name (AbortError, TimeoutError, FetchError,
     ZodError)
  3. Stack-path patterns (/lib/storage/, /lib/email/, documenso,
     openai|claude, /queue/workers/)
  4. Free-text message keywords (econnrefused, rate limit, timeout,
     unauthorized|invalid api key)
- Returns { label, hint, subsystem } for the inspector badge.
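A first-match-wins sketch of the 4-pass flow; the patterns are abbreviated from the list above, and the labels/hints are illustrative, not the real table in src/lib/error-classifier.ts:

```typescript
interface Culprit {
  label: string;
  hint: string;
  subsystem: string;
}

const SQLSTATE: Record<string, Culprit> = {
  '23503': { label: 'FK violation', hint: 'Row references a missing parent.', subsystem: 'postgres' },
  '23505': { label: 'Unique violation', hint: 'Duplicate key on insert.', subsystem: 'postgres' },
  '42703': { label: 'Schema drift', hint: 'Column missing; migration not applied?', subsystem: 'postgres' },
};

function classify(ev: {
  code?: string;
  errorName?: string;
  stack?: string;
  message?: string;
}): Culprit {
  // Pass 1: Postgres SQLSTATE extracted into metadata.code
  if (ev.code && SQLSTATE[ev.code]) return SQLSTATE[ev.code];
  // Pass 2: error class name
  if (ev.errorName === 'TimeoutError' || ev.errorName === 'AbortError') {
    return { label: 'Timeout', hint: 'Upstream call aborted or timed out.', subsystem: 'network' };
  }
  // Pass 3: stack-path patterns
  if (ev.stack?.includes('/lib/storage/')) {
    return { label: 'Storage', hint: 'Object-store call failed.', subsystem: 'storage' };
  }
  // Pass 4: free-text message keywords
  if (/econnrefused|rate limit|timeout/i.test(ev.message ?? '')) {
    return { label: 'Connectivity', hint: 'Dependency unreachable or throttling.', subsystem: 'network' };
  }
  return { label: 'Unclassified', hint: 'No pattern matched.', subsystem: 'unknown' };
}
```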

CLIENT SIDE
- apiFetch throws structured ApiError with message + code + requestId
  + details + retryAfter.
- toastError() helper renders the standard 3-line toast:
  plain message / Error code: X / Reference ID: Y [Copy ID].
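The client shapes, sketched (toast rendering is app-specific; this only shows the ApiError fields and the 3-line text the toast helper would produce):

```typescript
class ApiError extends Error {
  constructor(
    message: string,
    readonly code: string | undefined,
    readonly requestId: string | undefined,
    readonly details?: unknown,
    readonly retryAfter?: number,
  ) {
    super(message);
    this.name = 'ApiError';
  }
}

/** The standard 3-line toast body: message / code / reference id. */
function toastLines(err: ApiError): string[] {
  const lines = [err.message];
  if (err.code) lines.push(`Error code: ${err.code}`);
  if (err.requestId) lines.push(`Reference ID: ${err.requestId}`);
  return lines;
}
```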

ADMIN INSPECTOR
- /<port>/admin/errors lists captured 5xx with status badge + path +
  likely-culprit badge + truncated message + reference id. Filter by
  status code; auto-refresh via TanStack Query.
- /<port>/admin/errors/<requestId> deep-dive: request shape, full
  error name+message+stack, sanitized body excerpt, raw metadata,
  registered-code lookup (so admin can compare to what user saw),
  likely-culprit hint with subsystem tag.
- /<port>/admin/errors/codes is the in-app code reference page —
  every registered code grouped by domain prefix, searchable, with
  HTTP status + user message inline. Linked from inspector header
  so admins can flip to it while triaging.
- Permission: admin.view_audit_log. Super admins see all ports;
  regular admins port-scoped.
- system-monitoring dashboard now surfaces error_events alongside
  permission_denied audit + queue failed jobs (RecentError gains
  source: 'request' variant).
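The port-scoping rule above can be sketched as a tiny filter resolver (viewer shape is an assumption):

```typescript
interface Viewer {
  isSuperAdmin: boolean;
  portId: string;
}

/** Super admins may filter by any port (or none); regular admins are
 *  pinned to their own port and the request param is ignored. */
function scopeFilter(viewer: Viewer, requestedPortId?: string): { portId?: string } {
  if (viewer.isSuperAdmin) return { portId: requestedPortId };
  return { portId: viewer.portId };
}
```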

DOCS
- docs/error-handling.md walks through coded errors, plain-text
  message guidelines, client toasting, admin inspector usage,
  persistence rules, classifier internals, pruning, and the
  legacy → CodedError migration path.

MIGRATION SAFETY
- Audit confirmed all 41 migrations (0000-0040) apply cleanly in
  journal order against an empty DB. 0040 references ports(id)
  which exists from 0000. 0035/0038 don't deadlock under sequential
  psql -f. Removed redundant idx_ds_sent_by from 0038 (created in
  0037).

Tests: 1168/1168 vitest passing. tsc clean.
- security-error-responses tests updated for plain-text messages
  + new optional response keys (code/requestId/message).
- berth-pdf-versions tests assert stable error codes via
  toMatchObject({ code }) rather than message regex.
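A self-contained sketch of that assertion style; the service here is a stub (only the thrown shape matters), and `expectCode` stands in for vitest's `rejects.toMatchObject({ code })`:

```typescript
class CodedError extends Error {
  constructor(readonly code: string) {
    super(code);
    this.name = 'CodedError';
  }
}

// Stand-in for the real uploadBerthPdf; cap value is an assumption.
async function uploadBerthPdf(args: { size: number }): Promise<void> {
  const MAX = 25 * 1024 * 1024;
  if (args.size === 0) throw new CodedError('BERTHS_PDF_EMPTY');
  if (args.size > MAX) throw new CodedError('BERTHS_PDF_TOO_LARGE');
}

// In vitest this is:
//   await expect(p).rejects.toMatchObject({ code: 'BERTHS_PDF_TOO_LARGE' });
async function expectCode(p: Promise<unknown>, code: string): Promise<void> {
  try {
    await p;
  } catch (err) {
    const got = (err as { code?: string }).code;
    if (got === code) return;
    throw new Error(`expected code ${code}, got ${got}`);
  }
  throw new Error('expected rejection');
}
```

Matching on `code` keeps the suite stable when user-facing message wording changes, which is exactly why the message-regex assertions were retired.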

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Author: Matt Ciaccio
Date: 2026-05-05 14:12:59 +02:00
Parent: c4a41d5f5b
Commit: 4723994bdc
26 changed files with 2027 additions and 169 deletions


@@ -18,7 +18,7 @@ import { and, desc, eq, isNull, max, sql } from 'drizzle-orm';
import { db } from '@/lib/db';
import { berths, berthPdfVersions } from '@/lib/db/schema/berths';
import { systemSettings } from '@/lib/db/schema/system';
-import { ConflictError, NotFoundError, ValidationError } from '@/lib/errors';
+import { CodedError, ConflictError, NotFoundError, ValidationError } from '@/lib/errors';
import { logger } from '@/lib/logger';
import { getStorageBackend } from '@/lib/storage';
@@ -218,15 +218,13 @@ export async function uploadBerthPdf(args: UploadBerthPdfArgs): Promise<UploadBe
if (!isPdfMagic(buffer)) {
// Best-effort cleanup if the storage already has a partial.
if (args.storageKey) await backend.delete(args.storageKey).catch(() => undefined);
-throw new ValidationError(
-'Uploaded file failed PDF magic-byte check (does not start with %PDF-).',
-);
+throw new CodedError('BERTHS_PDF_MAGIC_BYTE');
}
-if (buffer.length === 0) throw new ValidationError('Uploaded PDF is empty (0 bytes).');
+if (buffer.length === 0) throw new CodedError('BERTHS_PDF_EMPTY');
if (buffer.length > maxBytes) {
-throw new ValidationError(
-`PDF exceeds ${maxMb} MB upload cap (got ${(buffer.length / 1024 / 1024).toFixed(1)} MB).`,
-);
+throw new CodedError('BERTHS_PDF_TOO_LARGE', {
+internalMessage: `PDF exceeds ${maxMb} MB upload cap (got ${(buffer.length / 1024 / 1024).toFixed(1)} MB).`,
+});
}
const written = await backend.put(storageKey, buffer, { contentType: 'application/pdf' });
storageKey = written.key;
@@ -240,13 +238,13 @@ export async function uploadBerthPdf(args: UploadBerthPdfArgs): Promise<UploadBe
}
if (head.sizeBytes === 0) {
await backend.delete(args.storageKey).catch(() => undefined);
-throw new ValidationError('Uploaded PDF is empty (0 bytes).');
+throw new CodedError('BERTHS_PDF_EMPTY');
}
if (head.sizeBytes > maxBytes) {
await backend.delete(args.storageKey).catch(() => undefined);
-throw new ValidationError(
-`PDF exceeds ${maxMb} MB upload cap (got ${(head.sizeBytes / 1024 / 1024).toFixed(1)} MB).`,
-);
+throw new CodedError('BERTHS_PDF_TOO_LARGE', {
+internalMessage: `PDF exceeds ${maxMb} MB upload cap (got ${(head.sizeBytes / 1024 / 1024).toFixed(1)} MB).`,
+});
}
if (head.contentType !== 'application/pdf' && head.contentType !== 'application/octet-stream') {
await backend.delete(args.storageKey).catch(() => undefined);
@@ -607,7 +605,7 @@ export async function rollbackToVersion(
if (!berthRow) throw new NotFoundError('Berth');
if (berthRow.currentPdfVersionId === versionId) {
-throw new ConflictError('That version is already current; rollback is a no-op.');
+throw new CodedError('BERTHS_VERSION_ALREADY_CURRENT');
}
await db


@@ -34,6 +34,7 @@ import { and, eq, inArray, sql } from 'drizzle-orm';
import { db } from '@/lib/db';
import { systemSettings } from '@/lib/db/schema/system';
import { interests } from '@/lib/db/schema/interests';
import { CodedError } from '@/lib/errors';
// ─── Settings ──────────────────────────────────────────────────────────────
@@ -395,9 +396,9 @@ export async function recommendBerths(args: RecommendBerthsArgs): Promise<Recomm
if (!interestInput) return [];
if (interestInput.portId !== args.portId) {
// Defensive: caller passed a port that doesn't own this interest.
-throw new Error(
-`Recommender: interest ${args.interestId} belongs to port ${interestInput.portId}, not ${args.portId}`,
-);
+throw new CodedError('RECOMMENDER_INTEREST_PORT_MISMATCH', {
+internalMessage: `interest ${args.interestId} belongs to port ${interestInput.portId}, not ${args.portId}`,
+});
}
const oversizePct = args.maxOversizePct ?? settings.maxOversizePct;


@@ -0,0 +1,172 @@
/**
* Error event capture + retrieval.
*
* `captureErrorEvent(...)` is called from `errorResponse(...)` whenever
* an unhandled (5xx) error fires inside a route handler. It pulls the
* request context from AsyncLocalStorage, sanitizes the payload, and
* inserts one row into `error_events`. Failure to write must NEVER
* throw — the caller is already in the error path.
*
* `listErrorEvents` / `getErrorEventById` back the super-admin inspector.
*/
import { and, desc, eq, gte, lte } from 'drizzle-orm';
import { db } from '@/lib/db';
import { errorEvents, type ErrorEvent } from '@/lib/db/schema/system';
import { logger } from '@/lib/logger';
import { getRequestContext } from '@/lib/request-context';
const STACK_MAX_BYTES = 4 * 1024;
const BODY_MAX_BYTES = 1 * 1024;
/** Keys whose values are never persisted to the body excerpt. */
const SENSITIVE_KEYS = new Set([
'password',
'newPassword',
'oldPassword',
'token',
'secret',
'apiKey',
'accessKey',
'secretKey',
'creditCard',
'cardNumber',
'cvv',
'ssn',
'authorization',
]);
/** Drop sensitive keys + cap the JSON length. */
function sanitizeBody(body: unknown): string | null {
if (body === null || body === undefined) return null;
let cloned: unknown;
try {
cloned = JSON.parse(JSON.stringify(body));
} catch {
return null;
}
function walk(value: unknown): unknown {
if (Array.isArray(value)) return value.map(walk);
if (value && typeof value === 'object') {
const out: Record<string, unknown> = {};
for (const [k, v] of Object.entries(value)) {
if (SENSITIVE_KEYS.has(k)) {
out[k] = '[REDACTED]';
} else {
out[k] = walk(v);
}
}
return out;
}
return value;
}
const sanitized = walk(cloned);
let serialized: string;
try {
serialized = JSON.stringify(sanitized);
} catch {
return null;
}
if (Buffer.byteLength(serialized, 'utf8') > BODY_MAX_BYTES) {
// NOTE: slice() counts UTF-16 code units, so the cap is approximate for multibyte payloads.
serialized = serialized.slice(0, BODY_MAX_BYTES) + '…[truncated]';
}
return serialized;
}
interface CaptureArgs {
statusCode: number;
error: unknown;
/** Optional structured metadata (e.g. zod issues parsed from a ZodError). */
metadata?: Record<string, unknown>;
/** Sanitized request body (already JSON-serializable). Optional. */
body?: unknown;
}
/**
* Persist an error_events row tied to the active request context.
* Best-effort — silently swallows any DB failure (the caller is
* already returning the user an error response; we do NOT want to
* mask the original error with a logging-pipeline failure).
*/
export async function captureErrorEvent(args: CaptureArgs): Promise<void> {
const ctx = getRequestContext();
if (!ctx) {
// Outside a request context (e.g. queue worker). Skip — the queue has
// its own failure-capture in BullMQ.
return;
}
try {
const err = args.error;
const errorName = err instanceof Error ? err.name : typeof err;
const errorMessage = err instanceof Error ? err.message : err === undefined ? '' : String(err);
const stack = err instanceof Error && err.stack ? err.stack.slice(0, STACK_MAX_BYTES) : null;
const durationMs = Date.now() - ctx.startedAt;
// Pull through any well-known fields the upstream library decorated
// onto the error — Postgres driver uses `code` (SQLSTATE) and
// `severity`, fetch errors carry `cause.code`, etc. The classifier
// reads from `metadata.code` to drive the "likely culprit" badge.
const enriched: Record<string, unknown> = { ...(args.metadata ?? {}) };
if (err && typeof err === 'object') {
const e = err as { code?: unknown; severity?: unknown; cause?: { code?: unknown } };
if (typeof e.code === 'string') enriched.code = e.code;
if (typeof e.severity === 'string') enriched.severity = e.severity;
if (e.cause && typeof e.cause === 'object' && typeof e.cause.code === 'string') {
enriched.causeCode = e.cause.code;
}
}
await db
.insert(errorEvents)
.values({
requestId: ctx.requestId,
portId: ctx.portId || null,
userId: ctx.userId || null,
statusCode: args.statusCode,
method: ctx.method,
path: ctx.path,
errorName,
errorMessage,
errorStack: stack,
requestBodyExcerpt: sanitizeBody(args.body),
metadata: enriched,
durationMs,
})
.onConflictDoNothing();
} catch (writeErr) {
// Logged but never thrown — the caller is in the error path already.
logger.error({ err: writeErr }, 'Failed to persist error_events row');
}
}
export interface ListErrorEventsFilter {
portId?: string;
statusCode?: number;
/** ISO date strings; defaults to last 7 days. */
from?: string;
to?: string;
limit?: number;
}
export async function listErrorEvents(filter: ListErrorEventsFilter): Promise<ErrorEvent[]> {
const conditions = [];
if (filter.portId) conditions.push(eq(errorEvents.portId, filter.portId));
if (filter.statusCode) conditions.push(eq(errorEvents.statusCode, filter.statusCode));
if (filter.from) conditions.push(gte(errorEvents.createdAt, new Date(filter.from)));
if (filter.to) conditions.push(lte(errorEvents.createdAt, new Date(filter.to)));
return db
.select()
.from(errorEvents)
.where(conditions.length ? and(...conditions) : undefined)
.orderBy(desc(errorEvents.createdAt))
.limit(filter.limit ?? 100);
}
export async function getErrorEventById(requestId: string): Promise<ErrorEvent | null> {
const row = await db.query.errorEvents.findFirst({
where: eq(errorEvents.requestId, requestId),
});
return row ?? null;
}


@@ -7,7 +7,7 @@ import { buildListQuery } from '@/lib/db/query-builder';
import { createAuditLog, type AuditMeta } from '@/lib/audit';
import { diffEntity } from '@/lib/entity-diff';
import { softDelete, restore } from '@/lib/db/utils';
-import { NotFoundError, ConflictError } from '@/lib/errors';
+import { CodedError, NotFoundError } from '@/lib/errors';
import { emitToRoom } from '@/lib/socket/server';
import { convert } from '@/lib/services/currency';
import { logger } from '@/lib/logger';
@@ -213,9 +213,7 @@ export async function updateExpense(
: existing.noReceiptAcknowledged;
const hasReceipts = Array.isArray(mergedReceiptIds) && mergedReceiptIds.length > 0;
if (!hasReceipts && !mergedAck) {
-throw new ConflictError(
-'Expense must either link a receipt file or acknowledge the no-receipt warning.',
-);
+throw new CodedError('EXPENSES_RECEIPT_REQUIRED');
}
const updateData: Record<string, unknown> = { ...data, updatedAt: new Date() };
@@ -292,7 +290,7 @@ export async function archiveExpense(id: string, portId: string, meta: AuditMeta
.limit(1);
if (linkedInvoice.length > 0) {
-throw new ConflictError('Cannot archive expense linked to a non-draft invoice');
+throw new CodedError('EXPENSES_INVOICE_LINKED');
}
await softDelete(expenses, expenses.id, id);


@@ -21,7 +21,7 @@ import { and, desc, eq, inArray } from 'drizzle-orm';
import { db } from '@/lib/db';
import { interestBerths, interests, type InterestBerth } from '@/lib/db/schema/interests';
import { berths } from '@/lib/db/schema/berths';
-import { ValidationError } from '@/lib/errors';
+import { CodedError } from '@/lib/errors';
type DbOrTx = typeof db | Parameters<Parameters<typeof db.transaction>[0]>[0];
@@ -215,7 +215,9 @@ export async function upsertInterestBerthTx(
.limit(1);
const side = sides[0];
if (side && side.interestPortId !== side.berthPortId) {
-throw new ValidationError('Cannot link an interest and a berth from different ports.');
+throw new CodedError('CROSS_PORT_LINK_REJECTED', {
+internalMessage: `interest ${interestId} (port ${side.interestPortId}) ↔ berth ${berthId} (port ${side.berthPortId})`,
+});
}
if (opts.isPrimary === true) {


@@ -1,5 +1,5 @@
import { db } from '@/lib/db';
-import { auditLogs } from '@/lib/db/schema';
+import { auditLogs, errorEvents } from '@/lib/db/schema';
import { redis } from '@/lib/redis';
import { minioClient } from '@/lib/minio/index';
import { getQueue, QUEUE_CONFIGS, type QueueName } from '@/lib/queue';
@@ -56,10 +56,17 @@ export interface ConnectionStatus {
export interface RecentError {
id: string;
-source: 'audit' | 'queue';
+source: 'audit' | 'queue' | 'request';
message: string;
timestamp: Date;
metadata?: Record<string, unknown>;
/** Set for `source: 'request'` rows so the UI can deep-link to
* /admin/errors/<requestId>. */
requestId?: string;
/** Set for `source: 'request'` rows. */
statusCode?: number;
/** Set for `source: 'request'` rows. */
errorCode?: string | null;
}
// ─── Timeout helper ───────────────────────────────────────────────────────────
@@ -364,8 +371,42 @@ export async function getRecentErrors(limit = 20): Promise<RecentError[]> {
.filter((r): r is PromiseFulfilledResult<RecentError[]> => r.status === 'fulfilled')
.flatMap((r) => r.value);
// Captured 5xx requests from the per-request error_events table —
// this is the deepest source: full stack head + body excerpt + path.
// The dedicated /admin/errors page paginates this; here we surface
// the most recent for the dashboard.
const requestErrorRows = await db
.select({
requestId: errorEvents.requestId,
statusCode: errorEvents.statusCode,
method: errorEvents.method,
path: errorEvents.path,
errorName: errorEvents.errorName,
errorMessage: errorEvents.errorMessage,
metadata: errorEvents.metadata,
createdAt: errorEvents.createdAt,
})
.from(errorEvents)
.orderBy(desc(errorEvents.createdAt))
.limit(limit);
const requestErrors: RecentError[] = requestErrorRows.map((row) => {
const meta = (row.metadata as Record<string, unknown>) ?? {};
return {
id: row.requestId,
source: 'request' as const,
message:
`${row.method} ${row.path} → ${row.statusCode} ${row.errorMessage ?? row.errorName ?? ''}`.trim(),
timestamp: row.createdAt,
metadata: meta,
requestId: row.requestId,
statusCode: row.statusCode,
errorCode: typeof meta.code === 'string' ? meta.code : null,
};
});
// Merge and sort combined list by timestamp descending
-const combined = [...auditResults, ...queueErrors].sort(
+const combined = [...auditResults, ...queueErrors, ...requestErrors].sort(
(a, b) => b.timestamp.getTime() - a.timestamp.getTime(),
);