fix(audit-final): pre-merge hardening + expense receipt UI

Final audit pass on feat/berth-recommender (3 parallel Opus agents)
caught 5 critical and ~12 high-severity findings. All addressed in-branch;
medium/low items deferred to docs/audit-final-deferred.md.

Critical:
- Add filesystem-backend PUT handler at /api/storage/[token] so
  presigned uploads stop 405-ing in filesystem mode (every browser-driven
  berth-PDF + brochure upload was broken). Same token-verify + replay
  protection as GET, plus magic-byte gate when c=application/pdf.
- Forward req.signal into streamExpensePdf so an aborted 1000-receipt
  export no longer keeps grinding for minutes.
- Strengthen Content-Disposition filename sanitization: the validator caps
  length, but \s matches CR/LF, which would let documentName forge response
  headers; restrict to [\w. -]+ and add an RFC 5987 filename* fallback.
- Lock public berths feed behind an explicit slug allowlist instead of
  ?portSlug= enumeration.
- Reject cross-port interest_berths upserts (defense-in-depth on top of
  the recommender SQL port filter; guard sketch after this list).
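
A minimal sketch of that cross-port guard, in the repo's drizzle style. The
table and column names (interests.portId, berths.portId) and the import
paths are assumptions about the schema, not its actual shape:

import { eq } from 'drizzle-orm';
import { db } from '@/lib/db'; // assumed path
import { berths, interests } from '@/lib/db/schema'; // assumed names

// Runs before the interest_berths upsert, on top of the recommender's
// SQL-side port filter: a berth id smuggled in from another tenant's
// port is rejected even if the UI never offered it.
async function assertSamePort(interestId: string, berthId: string): Promise<void> {
  const [berth] = await db
    .select({ portId: berths.portId })
    .from(berths)
    .where(eq(berths.id, berthId));
  const [interest] = await db
    .select({ portId: interests.portId })
    .from(interests)
    .where(eq(interests.id, interestId));
  if (!berth || !interest || berth.portId !== interest.portId) {
    throw new Error('interest_berths upsert rejected: berth and interest belong to different ports');
  }
}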

High:
- Recommender: width-only feasibility now caps length via an L/W ratio so a
  200ft berth doesn't surface for a 30ft beam request (ratio sketch after
  this list); total_interest_count filters out junction rows whose interest
  is in another port.
- Mooring normalization follow-up migration (0034) catches the un-hyphenated
  padded forms (A01) that the original 0024 WHERE clause missed.
- Send-out rate limit moved AFTER validation and scoped per-(port, user)
  so typos don't burn a slot and a multi-port rep can't be DoS'd by
  another tenant (limiter sketch after this list).
- Default-brochure path now blocks an archived row from sneaking through
  the partial unique index.
- NocoDB import --update-snapshot honoured under --dry-run so reps can
  refresh the seed JSON without committing DB writes.
- PDF export: orderBy desc(expenseDate); apply isNull(archivedAt) when
  expenseIds are passed (was bypassed); flag rate-unavailable rows with
  an amber footer instead of silently treating them as 1:1; skip the
  USD->EUR chain when source already matches target.
- expense-form-dialog: the revokeObjectURL cleanup now captures its URL in
  the closure instead of revoking the still-displayed one (hook sketch
  after this list); upload state is reset on close.
- scan/page: handleClearReceipt resets in-flight scan/upload mutations;
  Save disabled while upload pending.
- updateExpense re-asserts receipt-or-acknowledgement at the merged row
  so PATCH can't slip past the create-time refine (merged-row sketch after
  this list).
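
Sketch of the recommender's L/W cap from the first bullet above. The 5:1
length-to-beam ceiling and every name here are illustrative assumptions,
not the tuned production values:

const MAX_LENGTH_TO_BEAM_RATIO = 5; // assumed ceiling, not the shipped constant

interface BerthDims {
  lengthFt: number;
  widthFt: number;
}

// Width-only requests used to match any berth wide enough, including a
// 200 ft berth for a 30 ft beam. Capping the implied vessel length via
// an L/W ratio keeps wildly over-long berths out of the results.
function berthFeasibleForBeam(berth: BerthDims, requestBeamFt: number): boolean {
  if (berth.widthFt < requestBeamFt) return false; // the original width-only check
  const impliedMaxVesselLengthFt = requestBeamFt * MAX_LENGTH_TO_BEAM_RATIO; // 30 ft beam -> 150 ft
  return berth.lengthFt <= impliedMaxVesselLengthFt; // the new cap
}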
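
The send-out limiter, as a fixed-window sketch against the ioredis client
the storage proxy also uses; the key layout, window, and budget are invented
for illustration. The route invokes it only after schema validation
succeeds, so a typo'd payload never reaches it:

import { redis } from '@/lib/redis';

const WINDOW_SECONDS = 60; // illustrative
const MAX_SENDS_PER_WINDOW = 5; // illustrative

// Keyed per (port, user): another tenant hammering the endpoint cannot
// exhaust this rep's budget, and one rep's ports don't share a slot.
async function takeSendOutSlot(portId: string, userId: string): Promise<boolean> {
  const key = `ratelimit:sendout:${portId}:${userId}`;
  const count = await redis.incr(key);
  if (count === 1) {
    await redis.expire(key, WINDOW_SECONDS); // window starts on first hit
  }
  return count <= MAX_SENDS_PER_WINDOW;
}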
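
The revokeObjectURL fix is the standard React object-URL pattern: the effect
cleanup closes over the URL it created, so swapping files revokes the old
blob URL, never the one state currently renders (hook and state names are
illustrative):

import { useEffect, useState } from 'react';

function useReceiptPreview(file: File | null): string | null {
  const [previewUrl, setPreviewUrl] = useState<string | null>(null);
  useEffect(() => {
    if (!file) {
      setPreviewUrl(null);
      return;
    }
    const url = URL.createObjectURL(file);
    setPreviewUrl(url);
    // Revoke the URL this closure created, not previewUrl from state,
    // which by cleanup time already points at the next file's preview.
    return () => URL.revokeObjectURL(url);
  }, [file]);
  return previewUrl;
}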
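
And the updateExpense re-assert: the invariant has to hold on the row as it
will exist after the patch, because a PATCH that omits both fields would
pass any patch-only check vacuously. receiptKey is an assumed column name;
noReceiptAcknowledged is the flag mentioned below:

interface ExpenseReceiptFields {
  receiptKey: string | null; // assumed name
  noReceiptAcknowledged: boolean;
}

function assertReceiptOrAck(
  existing: ExpenseReceiptFields,
  patch: Partial<ExpenseReceiptFields>,
): void {
  // Merge first: validating `patch` alone lets an update that touches
  // neither field slip past the create-time refine.
  const merged = { ...existing, ...patch };
  if (!merged.receiptKey && !merged.noReceiptAcknowledged) {
    throw new Error('Expense needs a receipt or an explicit "no receipt" acknowledgement');
  }
}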

Plus the in-progress receipt upload UI for the expense form dialog
(receipt picker + "I have no receipt" checkbox + warning banner) and
a noReceiptAcknowledged flag on ExpenseRow for edit-mode hydration.

Includes the canonical plan doc (referenced in CLAUDE.md), the handoff
prompt, and a deferred-findings index for follow-up issues.

1163/1163 vitest passing. Typecheck clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

View File

@@ -16,6 +16,10 @@ import { toPublicBerth } from '@/lib/services/public-berths';
* ("A1", "B12") - Phase 0 normalized the entire CRM dataset.
*/
// Hard-coded allowlist for the public read-only feed. Adding a port here
// is a deliberate decision (not silent enumeration via ?portSlug=), so a
// future private tenant can't be exposed by accident.
const PUBLIC_PORT_SLUGS = new Set(['port-nimara']);
const DEFAULT_PUBLIC_PORT_SLUG = 'port-nimara';
const RESPONSE_HEADERS = {
  'cache-control': 'public, s-maxage=300, stale-while-revalidate=60',
@@ -30,7 +34,14 @@ export async function GET(
): Promise<Response> {
  const { mooringNumber } = await ctx.params;
  const url = new URL(request.url);
  const portSlug = url.searchParams.get('portSlug') ?? DEFAULT_PUBLIC_PORT_SLUG;
  const requestedSlug = url.searchParams.get('portSlug') ?? DEFAULT_PUBLIC_PORT_SLUG;
  if (!PUBLIC_PORT_SLUGS.has(requestedSlug)) {
    return NextResponse.json(
      { error: 'port is not part of the public berths feed', portSlug: requestedSlug },
      { status: 404, headers: { 'cache-control': 'no-store' } },
    );
  }
  const portSlug = requestedSlug;
  // Reject obviously malformed mooring numbers up front so cache poisoning
  // / random-URL probing returns 400 rather than 404 (saves a DB hit).

View File

@@ -25,6 +25,10 @@ import { toPublicBerth, type PublicBerth } from '@/lib/services/public-berths';
 * them up.
 */
// Hard-coded allowlist for the public read-only feed. Adding a port here
// is a deliberate decision (not silent enumeration via ?portSlug=), so a
// future private tenant can't be exposed by accident.
const PUBLIC_PORT_SLUGS = new Set(['port-nimara']);
const DEFAULT_PUBLIC_PORT_SLUG = 'port-nimara';
const RESPONSE_HEADERS = {
@@ -45,7 +49,14 @@ interface ListResponse {
export async function GET(request: Request): Promise<Response> {
  const url = new URL(request.url);
  const portSlug = url.searchParams.get('portSlug') ?? DEFAULT_PUBLIC_PORT_SLUG;
  const requestedSlug = url.searchParams.get('portSlug') ?? DEFAULT_PUBLIC_PORT_SLUG;
  if (!PUBLIC_PORT_SLUGS.has(requestedSlug)) {
    return NextResponse.json(
      { error: 'port is not part of the public berths feed', portSlug: requestedSlug },
      { status: 404, headers: { 'cache-control': 'no-store' } },
    );
  }
  const portSlug = requestedSlug;
  const [port] = await db
    .select({ id: ports.id })

View File

@@ -20,10 +20,12 @@ import { Readable } from 'node:stream';
import { NextRequest, NextResponse } from 'next/server';
import { MAX_FILE_SIZE } from '@/lib/constants/file-validation';
import { logger } from '@/lib/logger';
import { redis } from '@/lib/redis';
import { FilesystemBackend, getStorageBackend } from '@/lib/storage';
import { verifyProxyToken } from '@/lib/storage/filesystem';
import { isPdfMagic } from '@/lib/services/berth-pdf-parser';
export const runtime = 'nodejs';
export const dynamic = 'force-dynamic';
@@ -115,3 +117,120 @@ export async function GET(
  return new NextResponse(webStream, { status: 200, headers });
}
/**
 * Filesystem-backend upload proxy. The presigned URL minted by
 * `FilesystemBackend.presignUpload` points here. Without this handler the
 * browser-driven berth-PDF / brochure uploads would 405 in filesystem
 * deployments — the entire pluggable-storage abstraction relied on the
 * GET-only counterpart for downloads.
 *
 * Same token-verify + single-use replay protection as GET, plus:
 * - Hard size cap (rejects oversized bodies before any disk I/O).
 * - Magic-byte check when the issuer declared content-type=application/pdf
 *   (matches the §14.6 §6c/§7c invariant: every upload path verifies
 *   bytes server-side, not just at the client).
 */
export async function PUT(
  req: NextRequest,
  ctx: { params: Promise<{ token: string }> },
): Promise<NextResponse> {
  const { token } = await ctx.params;
  const backend = await getStorageBackend();
  if (!(backend instanceof FilesystemBackend)) {
    return NextResponse.json(
      { error: 'Storage proxy is only available in filesystem mode' },
      { status: 404 },
    );
  }
  const result = verifyProxyToken(token, backend.getHmacSecret());
  if (!result.ok) {
    logger.warn({ reason: result.reason }, 'Storage proxy upload token rejected');
    return NextResponse.json({ error: 'Invalid or expired token' }, { status: 403 });
  }
  const { payload } = result;
  // Separate replay namespace from GET so a token can validly serve one
  // upload AND one download (the issuer only mints the second), but a
  // PUT cannot be replayed against itself.
  const replayKey = `storage:proxy:put:${token.split('.')[0]}`;
  const remainingSeconds = Math.max(
    REPLAY_TTL_FLOOR_SECONDS,
    Math.min(REPLAY_TTL_CEILING_SECONDS, payload.e - Math.floor(Date.now() / 1000) + 60),
  );
  const setOk = await redis.set(replayKey, '1', 'EX', remainingSeconds, 'NX');
  if (setOk !== 'OK') {
    logger.warn({ key: payload.k }, 'Storage proxy upload token replay rejected');
    return NextResponse.json({ error: 'Token already used' }, { status: 403 });
  }
  // Pre-flight size check via Content-Length so a malicious caller can't
  // exhaust disk by streaming hundreds of MB before we look at the body.
  const contentLengthHeader = req.headers.get('content-length');
  const contentLength = contentLengthHeader ? Number(contentLengthHeader) : NaN;
  if (Number.isFinite(contentLength) && contentLength > MAX_FILE_SIZE) {
    return NextResponse.json(
      { error: `File exceeds ${MAX_FILE_SIZE} byte cap (Content-Length: ${contentLength})` },
      { status: 413 },
    );
  }
  if (!req.body) {
    return NextResponse.json({ error: 'Empty body' }, { status: 400 });
  }
  // Read the body into a buffer with a hard cap. Filesystem deployments are
  // small-tenant (single-node only — see FilesystemBackend boot guard) so
  // 50 MB ceiling fits comfortably in heap; no streaming needed.
  let buffer: Buffer;
  try {
    const chunks: Buffer[] = [];
    let total = 0;
    const reader = req.body.getReader();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      total += value.byteLength;
      if (total > MAX_FILE_SIZE) {
        try {
          await reader.cancel();
        } catch {
          /* ignore */
        }
        return NextResponse.json(
          { error: `File exceeds ${MAX_FILE_SIZE} byte cap` },
          { status: 413 },
        );
      }
      chunks.push(Buffer.from(value));
    }
    buffer = Buffer.concat(chunks);
  } catch (err) {
    logger.warn({ err, key: payload.k }, 'Storage proxy upload read failed');
    return NextResponse.json({ error: 'Upload read failed' }, { status: 400 });
  }
  // Magic-byte gate: when the token was minted with `c=application/pdf`
  // (the only consumer today — berth PDFs + brochures), refuse anything
  // that isn't actually a PDF. Mirrors the post-upload check in
  // berth-pdf.service.ts so the two paths behave identically.
  if (payload.c === 'application/pdf' && !isPdfMagic(buffer)) {
    return NextResponse.json(
      { error: 'Uploaded file failed PDF magic-byte check (does not start with %PDF-).' },
      { status: 400 },
    );
  }
  try {
    await backend.put(payload.k, buffer, {
      contentType: payload.c ?? 'application/octet-stream',
    });
  } catch (err) {
    logger.error({ err, key: payload.k }, 'Storage proxy upload write failed');
    return NextResponse.json({ error: 'Upload write failed' }, { status: 500 });
  }
  return NextResponse.json({ ok: true, key: payload.k, sizeBytes: buffer.length }, { status: 200 });
}
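
For context, roughly how a browser consumer drives this handler. The presign
route and its response shape are assumptions; only the PUT contract above
comes from this commit:

async function uploadBerthPdf(file: File): Promise<void> {
  // 1. Ask the server for a presigned URL (minted by FilesystemBackend.presignUpload).
  const presignRes = await fetch('/api/berth-pdfs/presign', { method: 'POST' }); // hypothetical route
  const { url } = (await presignRes.json()) as { url: string };
  // 2. PUT the raw bytes to /api/storage/[token]. A replayed PUT gets 403,
  // an oversized body 413, and a non-PDF body 400 from the magic-byte gate.
  const res = await fetch(url, {
    method: 'PUT',
    headers: { 'content-type': 'application/pdf' },
    body: file,
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
}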
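
The magic-byte gate itself leans on isPdfMagic from berth-pdf-parser; a
minimal equivalent (illustrative, not that module's source) is just a
signature check:

function looksLikePdf(buffer: Buffer): boolean {
  // Every well-formed PDF starts with the ASCII signature "%PDF-"
  // (e.g. "%PDF-1.7"), regardless of the declared content-type.
  return buffer.length >= 5 && buffer.subarray(0, 5).toString('latin1') === '%PDF-';
}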

View File

@@ -49,18 +49,25 @@ export const POST = withAuth(
        }
      : undefined,
    options: input.options,
    // Forward the request abort signal so the streaming PDF builder
    // stops fetching/resizing receipts the moment the client disconnects
    // (otherwise an aborted 1000-receipt export keeps the worker busy
    // for minutes after the user navigated away — see audit finding 2).
    signal: req.signal,
  });
  // NextResponse extends Response; passing a ReadableStream as the
  // body keeps the streaming semantics. The wrapper's RouteHandler
  // type expects NextResponse so we use it explicitly.
  // Content-Disposition filename hardening: the validator caps length
  // but `\s` matches CR/LF, which would let an attacker forge response
  // headers. Strip everything that isn't word/space/dot/dash, AND set
  // the RFC 5987 `filename*` so a UTF-8 body still survives.
  const safeFilename = suggestedFilename.replace(/[^\w. \-]+/g, '_');
  const disposition = `attachment; filename="${safeFilename}"; filename*=UTF-8''${encodeURIComponent(suggestedFilename)}`;
  return new NextResponse(stream, {
    status: 200,
    headers: {
      'Content-Type': 'application/pdf',
      'Content-Disposition': `attachment; filename="${suggestedFilename}"`,
      // The PDF is generated on the fly per-request and includes
      // potentially-sensitive expense data; never cache.
      'Content-Disposition': disposition,
      'Cache-Control': 'private, no-store, max-age=0',
      'X-Content-Type-Options': 'nosniff',
    },