pn-new-crm/src/app/api/storage/[token]/route.ts
Matt Ciaccio ade4c9e77d fix(audit-v2): platform-wide post-merge hardening across 5 domains
Five-domain audit (security, routes, DB, integrations, UI/UX) ran after
the cf37d09 merge. Critical + high-impact items landed here; deferred
medium/low items indexed in docs/audit-final-deferred.md (now organised
into a "Audit-final v2" section).

Security:
- Storage proxy tokens now bind to op (`'get'` vs `'put'`). A long-lived
  download URL minted by `presignDownload` for an emailed brochure can no
  longer be replayed against the proxy PUT to overwrite the original
  storage object. `verifyProxyToken` requires `expectedOp` and rejects
  mismatches; legacy tokens missing `op` fail closed. Regression tests
  added (a minimal sketch of the op check follows this list).
- Markdown email merge values are now markdown-escaped (`[`, `]`, `(`,
  `)`, `*`, `_`, `\`, backticks, braces) before substitution into the
  rep-authored body. A malicious value like `[click here](https://evil)`
  stored in `client.fullName` no longer survives `escapeHtml` to render
  as a real `<a href>` in the outbound email. Phishing-via-merge-field
  closed; regression tests added (escaping sketch after this list).
- Middleware now performs an Origin/Referer check on
  POST/PUT/PATCH/DELETE to `/api/v1/**`. Defense-in-depth on top of
  better-auth's SameSite=Lax cookie. Webhooks/public/auth/portal routes
  exempt as they don't carry the session cookie.
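
A minimal sketch of the op binding described in the first bullet. It assumes the token is a base64url-encoded JSON payload joined to an HMAC signature with a `.` (consistent with how the proxy route below splits the token and reads `payload.k` / `payload.e`); the helper name, the exact payload fields, and the encoding are illustrative, not the real `verifyProxyToken` in `src/lib/storage/filesystem.ts`:

```ts
import { createHmac, timingSafeEqual } from 'node:crypto';

type ProxyOp = 'get' | 'put';

// Illustrative payload shape; the real one lives in src/lib/storage/filesystem.ts.
interface ProxyTokenPayload {
  k: string;    // storage key
  e: number;    // expiry, epoch seconds
  op?: ProxyOp; // absent on legacy tokens
  c?: string;   // content type
  f?: string;   // download filename
}

function verifyProxyTokenSketch(
  token: string,
  secret: string,
  expectedOp: ProxyOp,
): { ok: true; payload: ProxyTokenPayload } | { ok: false; reason: string } {
  const [body, signature] = token.split('.');
  if (!body || !signature) return { ok: false, reason: 'malformed' };

  // Constant-time signature check over the payload half.
  const expected = createHmac('sha256', secret).update(body).digest('base64url');
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) {
    return { ok: false, reason: 'bad-signature' };
  }

  const payload = JSON.parse(Buffer.from(body, 'base64url').toString('utf8')) as ProxyTokenPayload;
  if (payload.e <= Math.floor(Date.now() / 1000)) return { ok: false, reason: 'expired' };

  // The audit fix: a token minted for 'get' can never authorize a 'put'.
  // Legacy tokens without `op` fail closed instead of defaulting to either.
  if (payload.op !== expectedOp) return { ok: false, reason: 'op-mismatch' };

  return { ok: true, payload };
}
```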
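
The merge-field escaping, sketched under the assumption of a single-pass replace over the characters listed above (the real helper name and its call site in the email pipeline may differ):

```ts
// Hypothetical helper: backslash-escape markdown metacharacters in a merge
// value before it is substituted into the rep-authored markdown body.
function escapeMarkdownMergeValue(value: string): string {
  return value.replace(/[\\`*_{}\[\]()]/g, (ch) => `\\${ch}`);
}

// '[click here](https://evil)' becomes '\[click here\]\(https://evil\)',
// which renders as literal text in the outbound email instead of a link.
```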

Routes:
- Template management routes were calling `withPermission('documents',
  'manage', ...)` — but `documents` doesn't have a `manage` action. The
  registry has `document_templates.manage`. Every non-superadmin was
  getting 403'd on the seven template endpoints. Fixed across the
  /admin/templates surface.
- Custom-fields permission resource is hardcoded to `clients` regardless
  of which entity (yacht/company/etc.) the values belong to. Documented
  as deferred (requires per-entity routes).

DB:
- documentSends: every parent FK (client_id, interest_id, berth_id,
  brochure_id, brochure_version_id) now uses ON DELETE SET NULL so the
  audit trail outlasts hard-deletes. The denormalized columns
  (recipient_email, document_kind, body_markdown, from_address) were
  added precisely for this. Migration 0035.
- Polymorphic discriminators on yachts.current_owner_type and
  invoices.billing_entity_type now have CHECK constraints — typos like
  `'clients'` vs `'client'` were silently inserting unreachable rows
  before. Migration 0036.

Integrations:
- Email attachment resolution (`src/lib/email/index.ts`) was importing
  MinIO directly instead of `getStorageBackend()`. Filesystem-backend
  deployments would have broken every email-with-attachment send. Now
  routes through the pluggable abstraction per CLAUDE.md.
- Documenso DOCUMENT_OPENED webhook filter relaxed: v2 payloads may omit
  `readStatus` or send it lowercase, so the very event that signals an
  open was being silently dropped. Any recipient on a DOCUMENT_OPENED
  event is now treated as opened (sketched after this list).
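
A sketch of the relaxed DOCUMENT_OPENED filter with a hypothetical payload shape (the real Documenso v2 types live in the webhook handler):

```ts
// Hypothetical v2 webhook shape: recipients may omit readStatus entirely,
// or send it lowercase ('opened' rather than 'OPENED').
interface DocumensoRecipient {
  email: string;
  readStatus?: string;
}

interface DocumensoWebhookEvent {
  event: string;
  recipients?: DocumensoRecipient[];
}

// Before: only recipients with readStatus === 'OPENED' were counted, so v2
// payloads were dropped. After: the event type alone marks every recipient
// on the payload as having opened the document.
function openedRecipients(evt: DocumensoWebhookEvent): DocumensoRecipient[] {
  if (evt.event !== 'DOCUMENT_OPENED') return [];
  return evt.recipients ?? [];
}
```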

UI/UX:
- Expense detail used to render `receiptFileIds` as opaque UUID badges —
  reps couldn't view the receipt they uploaded. Now renders an image
  thumbnail (via `/api/v1/files/[id]/preview`) plus a Download link for
  PDFs. Closed the "where's my receipt?" loop in the expense flow.
- Expense detail Edit + Archive buttons now `<PermissionGate>` and the
  archive mutation surfaces success/error toasts instead of silent 403s.
- Brochures admin: setDefault/archive/create mutations now have onError
  toasts (only onSuccess existed before).
- Removed broken bulk-upload link in scan/page (route doesn't exist;
  used a raw `<a>` triggering a full reload to a 404).

Test status: 1168/1168 vitest passing. tsc clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 05:51:39 +02:00

237 lines
9.1 KiB
TypeScript

/**
 * Filesystem-backend download proxy.
 *
 * The `FilesystemBackend.presignDownload(...)` returns a CRM-internal URL of
 * the form `/api/storage/<hmac-signed-token>`. This route verifies the HMAC,
 * checks expiry, enforces single-use via a short Redis cache, then streams
 * the file out with explicit `Content-Type` + `Content-Disposition`.
 *
 * §14.9a mitigations exercised here:
 * - HMAC verification (timingSafeEqual via filesystem.verifyProxyToken)
 * - expiry check (token includes `e` epoch seconds)
 * - single-use replay protection via short Redis SET-NX
 * - Node runtime only (no edge); explicit headers so Next.js doesn't try to
 *   process the bytes (no image optimization, no streaming transforms)
 */
import { createReadStream } from 'node:fs';
import * as fs from 'node:fs/promises';
import { Readable } from 'node:stream';
import { NextRequest, NextResponse } from 'next/server';
import { MAX_FILE_SIZE } from '@/lib/constants/file-validation';
import { logger } from '@/lib/logger';
import { redis } from '@/lib/redis';
import { FilesystemBackend, getStorageBackend } from '@/lib/storage';
import { verifyProxyToken } from '@/lib/storage/filesystem';
import { isPdfMagic } from '@/lib/services/berth-pdf-parser';

export const runtime = 'nodejs';
export const dynamic = 'force-dynamic';

// Replay-protection TTL must outlive the token itself, otherwise the
// dedup key expires and the same token can be redeemed twice. We pin it
// to the token's own expiry (clamped to a 25-day ceiling so a forged
// far-future token can't pollute Redis indefinitely). Send-out emails
// mint 24-hour tokens so the typical TTL is 24h + a small buffer.
const REPLAY_TTL_FLOOR_SECONDS = 60; // never below 60s (post-expiry tail).
const REPLAY_TTL_CEILING_SECONDS = 25 * 24 * 60 * 60; // 25 days.

export async function GET(
  _req: NextRequest,
  ctx: { params: Promise<{ token: string }> },
): Promise<NextResponse> {
  const { token } = await ctx.params;

  const backend = await getStorageBackend();
  if (!(backend instanceof FilesystemBackend)) {
    return NextResponse.json(
      { error: 'Storage proxy is only available in filesystem mode' },
      { status: 404 },
    );
  }

  const result = verifyProxyToken(token, backend.getHmacSecret(), 'get');
  if (!result.ok) {
    logger.warn({ reason: result.reason }, 'Storage proxy token rejected');
    return NextResponse.json({ error: 'Invalid or expired token' }, { status: 403 });
  }
  const { payload } = result;

  // Single-use enforcement. SET NX with a TTL pinned to the token's own
  // expiry so the dedup window never closes before the token does. Using
  // the body half of the token as the dedup key (signature included
  // would also work but body is enough — a reused token has the same body).
  const replayKey = `storage:proxy:seen:${token.split('.')[0]}`;
  const remainingSeconds = Math.max(
    REPLAY_TTL_FLOOR_SECONDS,
    Math.min(REPLAY_TTL_CEILING_SECONDS, payload.e - Math.floor(Date.now() / 1000) + 60),
  );
  const setOk = await redis.set(replayKey, '1', 'EX', remainingSeconds, 'NX');
  if (setOk !== 'OK') {
    logger.warn({ key: payload.k }, 'Storage proxy token replay rejected');
    return NextResponse.json({ error: 'Token already used' }, { status: 403 });
  }

  let absolutePath: string;
  try {
    absolutePath = backend.resolveKeyForProxy(payload.k);
  } catch (err) {
    logger.warn({ err, key: payload.k }, 'Storage proxy key resolution failed');
    return NextResponse.json({ error: 'Invalid key' }, { status: 400 });
  }

  let size: number;
  try {
    const stat = await fs.stat(absolutePath);
    if (!stat.isFile()) {
      return NextResponse.json({ error: 'Not found' }, { status: 404 });
    }
    size = stat.size;
  } catch (err) {
    const code = (err as NodeJS.ErrnoException).code;
    if (code === 'ENOENT') {
      return NextResponse.json({ error: 'Not found' }, { status: 404 });
    }
    throw err;
  }

  // Convert the Node Readable into a Web ReadableStream for NextResponse.
  const nodeStream = createReadStream(absolutePath);
  const webStream = Readable.toWeb(nodeStream) as unknown as ReadableStream<Uint8Array>;

  const headers = new Headers();
  headers.set('Content-Type', payload.c ?? 'application/octet-stream');
  headers.set('Content-Length', String(size));
  if (payload.f) {
    // RFC 5987 — quote the filename and provide a UTF-8 fallback.
    const safe = payload.f.replace(/"/g, '');
    headers.set(
      'Content-Disposition',
      `attachment; filename="${safe}"; filename*=UTF-8''${encodeURIComponent(payload.f)}`,
    );
  }
  headers.set('Cache-Control', 'private, no-store');
  headers.set('X-Content-Type-Options', 'nosniff');

  return new NextResponse(webStream, { status: 200, headers });
}

/**
 * Filesystem-backend upload proxy. The presigned URL minted by
 * `FilesystemBackend.presignUpload` points here. Without this handler the
 * browser-driven berth-PDF / brochure uploads would 405 in filesystem
 * deployments — the entire pluggable-storage abstraction relied on the
 * GET-only counterpart for downloads.
 *
 * Same token-verify + single-use replay protection as GET, plus:
 * - Hard size cap (rejects oversized bodies before any disk I/O).
 * - Magic-byte check when the issuer declared content-type=application/pdf
 *   (matches the §14.6 §6c/§7c invariant: every upload path verifies
 *   bytes server-side, not just at the client).
 */
export async function PUT(
  req: NextRequest,
  ctx: { params: Promise<{ token: string }> },
): Promise<NextResponse> {
  const { token } = await ctx.params;

  const backend = await getStorageBackend();
  if (!(backend instanceof FilesystemBackend)) {
    return NextResponse.json(
      { error: 'Storage proxy is only available in filesystem mode' },
      { status: 404 },
    );
  }

  const result = verifyProxyToken(token, backend.getHmacSecret(), 'put');
  if (!result.ok) {
    logger.warn({ reason: result.reason }, 'Storage proxy upload token rejected');
    return NextResponse.json({ error: 'Invalid or expired token' }, { status: 403 });
  }
  const { payload } = result;

  // Separate replay namespace from GET so a token can validly serve one
  // upload AND one download (the issuer only mints the second), but a
  // PUT cannot be replayed against itself.
  const replayKey = `storage:proxy:put:${token.split('.')[0]}`;
  const remainingSeconds = Math.max(
    REPLAY_TTL_FLOOR_SECONDS,
    Math.min(REPLAY_TTL_CEILING_SECONDS, payload.e - Math.floor(Date.now() / 1000) + 60),
  );
  const setOk = await redis.set(replayKey, '1', 'EX', remainingSeconds, 'NX');
  if (setOk !== 'OK') {
    logger.warn({ key: payload.k }, 'Storage proxy upload token replay rejected');
    return NextResponse.json({ error: 'Token already used' }, { status: 403 });
  }

  // Pre-flight size check via Content-Length so a malicious caller can't
  // exhaust disk by streaming hundreds of MB before we look at the body.
  const contentLengthHeader = req.headers.get('content-length');
  const contentLength = contentLengthHeader ? Number(contentLengthHeader) : NaN;
  if (Number.isFinite(contentLength) && contentLength > MAX_FILE_SIZE) {
    return NextResponse.json(
      { error: `File exceeds ${MAX_FILE_SIZE} byte cap (Content-Length: ${contentLength})` },
      { status: 413 },
    );
  }

  if (!req.body) {
    return NextResponse.json({ error: 'Empty body' }, { status: 400 });
  }

  // Read the body into a buffer with a hard cap. Filesystem deployments are
  // small-tenant (single-node only — see FilesystemBackend boot guard) so
  // 50 MB ceiling fits comfortably in heap; no streaming needed.
  let buffer: Buffer;
  try {
    const chunks: Buffer[] = [];
    let total = 0;
    const reader = req.body.getReader();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      total += value.byteLength;
      if (total > MAX_FILE_SIZE) {
        try {
          await reader.cancel();
        } catch {
          /* ignore */
        }
        return NextResponse.json(
          { error: `File exceeds ${MAX_FILE_SIZE} byte cap` },
          { status: 413 },
        );
      }
      chunks.push(Buffer.from(value));
    }
    buffer = Buffer.concat(chunks);
  } catch (err) {
    logger.warn({ err, key: payload.k }, 'Storage proxy upload read failed');
    return NextResponse.json({ error: 'Upload read failed' }, { status: 400 });
  }

  // Magic-byte gate: when the token was minted with `c=application/pdf`
  // (the only consumer today — berth PDFs + brochures), refuse anything
  // that isn't actually a PDF. Mirrors the post-upload check in
  // berth-pdf.service.ts so the two paths behave identically.
  if (payload.c === 'application/pdf' && !isPdfMagic(buffer)) {
    return NextResponse.json(
      { error: 'Uploaded file failed PDF magic-byte check (does not start with %PDF-).' },
      { status: 400 },
    );
  }

  try {
    await backend.put(payload.k, buffer, {
      contentType: payload.c ?? 'application/octet-stream',
    });
  } catch (err) {
    logger.error({ err, key: payload.k }, 'Storage proxy upload write failed');
    return NextResponse.json({ error: 'Upload write failed' }, { status: 500 });
  }

  return NextResponse.json({ ok: true, key: payload.k, sizeBytes: buffer.length }, { status: 200 });
}