pn-new-crm/src/app/api/storage/[token]/route.ts
237 lines · 9.1 KiB · TypeScript

feat(storage): pluggable s3-or-filesystem backend + migration CLI + admin UI

Phase 6a from docs/berth-recommender-and-pdf-plan.md §4.7a + §14.9a. Lays the storage groundwork for Phase 6b/7 file-bearing schemas (per-berth PDFs, brochures) without touching those domains yet.

New files:
- src/lib/storage/index.ts — StorageBackend interface + per-process factory keyed on system_settings.
- src/lib/storage/s3.ts — S3-compatible backend (MinIO/AWS/B2/R2/Wasabi/Tigris) wrapping the existing minio JS client. Includes a healthCheck() used by the admin "Test connection" button.
- src/lib/storage/filesystem.ts — Local filesystem backend with all §14.9a mitigations baked in.
- src/lib/storage/migrate.ts — Shared migration core: pg_advisory_lock, per-row resumable progress markers, sha256 round-trip verification, atomic storage_backend flip on success.
- scripts/migrate-storage.ts — Thin CLI shim around runMigration().
- src/app/api/storage/[token]/route.ts — Filesystem proxy GET. Verifies HMAC, enforces single-use replay protection via Redis SET NX, streams via NextResponse ReadableStream with explicit Content-Type + Content-Disposition. Node runtime only.
- src/app/api/v1/admin/storage/route.ts — GET status + POST connection test.
- src/app/api/v1/admin/storage/migrate/route.ts — Super-admin-only POST that runs the exact same runMigration() as the CLI.
- src/app/(dashboard)/[portSlug]/admin/storage/page.tsx — Super-admin admin UI (current backend, capacity stats, switch button with dry-run, test connection, backup hint).
- src/components/admin/storage-admin-panel.tsx — Client component for the page above.

§14.9a critical mitigations implemented:
- Path-traversal: storage keys validated against ^[a-zA-Z0-9/_.-]+$; `..`, `.`, `//`, leading `/`, and overlength keys rejected.
- Realpath: storage root realpath'd at create time; every per-key resolution checked against the realpath'd prefix.
- Storage root created (or chmod'd) to 0o700.
- Multi-node refusal: FilesystemBackend.create() throws when MULTI_NODE_DEPLOYMENT=true.
- HMAC token: sha256-HMAC over the (key, expiry, nonce, filename, content-type) payload. Verified with timingSafeEqual; bad sig, expired, or invalid-key payloads all return 403.
- Single-use replay: token body cached in Redis SET NX EX 1800s.
- sha256 round-trip: copyAndVerify() re-fetches from the target after put() and aborts the migration on any mismatch.
- Free-disk pre-flight: when migrating to filesystem, sums byte counts via source.head() and aborts if free space < total * 1.2.
- pg_advisory_lock(0xc7000a01) prevents concurrent migrations.
- Resumable: per-row progress markers in _storage_migration_progress.

system_settings keys read by the factory (jsonb, no schema change): storage_backend, storage_s3_endpoint, storage_s3_region, storage_s3_bucket, storage_s3_access_key, storage_s3_secret_key_encrypted, storage_s3_force_path_style, storage_filesystem_root, storage_proxy_hmac_secret_encrypted. Defaults: storage_backend=`s3`, storage_filesystem_root=`./storage` (./storage added to .gitignore).

Tests added (34 tests, all green):
- tests/unit/storage/filesystem-backend.test.ts — key validation allow/reject matrix, realpath escape, 0o700 perms, multi-node refusal, HMAC token sign/verify/tamper/expire/invalid-key.
- tests/unit/storage/copy-and-verify.test.ts — sha256 mismatch on round-trip aborts the migration.
- tests/integration/storage/proxy-route.test.ts — happy path, wrong HMAC secret, expired token, replay rejection.

Phase 6a ships zero file-bearing tables — TABLES_WITH_STORAGE_KEYS is intentionally empty. berth_pdf_versions and brochure_versions land in Phase 6b and join the list there. Existing s3_key columns: only gdpr_export_jobs.storage_key, already named correctly — no rename needed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 03:15:59 +02:00
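The HMAC token scheme the commit describes (sha256-HMAC over the (key, expiry, nonce, filename, content-type) payload, verified with timingSafeEqual) could be sketched roughly as below. This is a hypothetical illustration, not the project's actual wire format: the base64url `body.signature` layout and the short field names `k`/`e`/`n`/`f`/`c` are assumptions.

```typescript
// Hypothetical sketch of an HMAC-signed proxy token: body = base64url(JSON
// payload), sig = HMAC-SHA256(secret, body), token = `${body}.${sig}`.
import { createHmac, timingSafeEqual } from 'node:crypto';

interface ProxyPayload {
  k: string; // storage key (assumed field name)
  e: number; // expiry, epoch seconds
  n: string; // nonce (assumed field name)
  f: string; // filename (assumed field name)
  c: string; // content-type
}

function signProxyToken(payload: ProxyPayload, secret: string): string {
  const body = Buffer.from(JSON.stringify(payload)).toString('base64url');
  const sig = createHmac('sha256', secret).update(body).digest('base64url');
  return `${body}.${sig}`;
}

function verifyProxyToken(
  token: string,
  secret: string,
): { ok: true; payload: ProxyPayload } | { ok: false; reason: string } {
  const [body, sig] = token.split('.');
  if (!body || !sig) return { ok: false, reason: 'malformed' };
  // Recompute the signature and compare in constant time.
  const expected = createHmac('sha256', secret).update(body).digest('base64url');
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) {
    return { ok: false, reason: 'bad-signature' };
  }
  const payload = JSON.parse(Buffer.from(body, 'base64url').toString()) as ProxyPayload;
  // Reject non-finite or past expiries (bad sig / expired both surface as 403 upstream).
  if (!Number.isFinite(payload.e) || payload.e * 1000 < Date.now()) {
    return { ok: false, reason: 'expired' };
  }
  return { ok: true, payload };
}
```

Signing only over the opaque body (rather than over re-serialized fields) keeps verification byte-exact and immune to JSON key-ordering differences.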
/**
 * Filesystem-backend download proxy.
 *
 * `FilesystemBackend.presignDownload(...)` returns a CRM-internal URL of
 * the form `/api/storage/<hmac-signed-token>`. This route verifies the HMAC,
 * checks expiry, enforces single-use via a short Redis cache, then streams
 * the file out with explicit `Content-Type` + `Content-Disposition`.
 *
 * §14.9a mitigations exercised here:
 * - HMAC verification (timingSafeEqual via filesystem.verifyProxyToken)
 * - expiry check (token includes `e` epoch seconds)
 * - single-use replay protection via a short Redis SET-NX
 * - Node runtime only (no edge); explicit headers so Next.js doesn't try to
 *   process the bytes (no image optimization, no streaming transforms)
 */
import { createReadStream } from 'node:fs';
import * as fs from 'node:fs/promises';
import { Readable } from 'node:stream';
import { NextRequest, NextResponse } from 'next/server';
fix(audit-final): pre-merge hardening + expense receipt UI

Final audit pass on feat/berth-recommender (3 parallel Opus agents) caught 5 critical and ~12 high-severity findings. All addressed in-branch; medium/low items deferred to docs/audit-final-deferred.md.

Critical:
- Add filesystem-backend PUT handler at /api/storage/[token] so presigned uploads stop 405-ing in filesystem mode (every browser-driven berth-PDF + brochure upload was broken). Same token-verify + replay protection as GET, plus magic-byte gate when c=application/pdf.
- Forward req.signal into streamExpensePdf so an aborted 1000-receipt export no longer keeps grinding for minutes.
- Strengthen Content-Disposition filename sanitization: \s matches CR/LF, which would let documentName forge headers; restrict to [\w. -]+ and add a filename* RFC 5987 fallback.
- Lock public berths feed behind an explicit slug allowlist instead of ?portSlug= enumeration.
- Reject cross-port interest_berths upserts (defense-in-depth on top of the recommender SQL port filter).

High:
- Recommender: width-only feasibility now caps length via L/W ratio so a 200ft berth doesn't surface for a 30ft beam request; total_interest_count filters out junction rows whose interest is in another port.
- Mooring normalization follow-up migration (0034) catches un-hyphenated padded forms (A01) the original 0024 WHERE missed.
- Send-out rate limit moved AFTER validation and scoped per-(port, user) so typos don't burn a slot and a multi-port rep can't be DoS'd by another tenant.
- Default-brochure path now blocks an archived row from sneaking through the partial unique index.
- NocoDB import --update-snapshot honoured under --dry-run so reps can refresh the seed JSON without committing DB writes.
- PDF export: orderBy desc(expenseDate); apply isNull(archivedAt) when expenseIds are passed (was bypassed); flag rate-unavailable rows with an amber footer instead of silently treating them as 1:1; skip the USD->EUR chain when the source already matches the target.
- expense-form-dialog: revokeObjectURL captures the URL in the closure instead of revoking the still-displayed one; reset upload state on close.
- scan/page: handleClearReceipt resets in-flight scan/upload mutations; Save disabled while upload pending.
- updateExpense re-asserts receipt-or-acknowledgement at the merged row so PATCH can't slip past the create-time refine.

Plus the in-progress receipt upload UI for the expense form dialog (receipt picker + "I have no receipt" checkbox + warning banner) and a noReceiptAcknowledged flag on ExpenseRow for edit-mode hydration.

Includes the canonical plan doc (referenced in CLAUDE.md), the handoff prompt, and a deferred-findings index for follow-up issues.

1163/1163 vitest passing. Typecheck clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 05:11:26 +02:00
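The Content-Disposition hardening above (restrict the quoted filename to `[\w. -]`, add an RFC 5987 `filename*` fallback) could look roughly like the sketch below. The helper name is hypothetical; only the character class and the `filename*` fallback come from the commit message.

```typescript
// Sketch of CR/LF-safe Content-Disposition construction. A sanitizer built on
// \s is unsafe because \s matches CR and LF, so a crafted name could inject
// extra headers; here anything outside [\w. -] is collapsed to '_'.
function contentDispositionFor(filename: string): string {
  // ASCII-safe quoted fallback: word chars, dots, spaces, hyphens only.
  const ascii = filename.replace(/[^\w. -]+/g, '_') || 'download';
  // RFC 5987 form carries the full name percent-encoded (never raw CR/LF).
  const utf8 = encodeURIComponent(filename).replace(/['()*]/g, (c) =>
    '%' + c.charCodeAt(0).toString(16).toUpperCase(),
  );
  return `attachment; filename="${ascii}"; filename*=UTF-8''${utf8}`;
}
```

Browsers that understand `filename*` prefer it, so non-ASCII names survive intact while legacy clients get the sanitized ASCII fallback.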
import { MAX_FILE_SIZE } from '@/lib/constants/file-validation';
import { logger } from '@/lib/logger';
import { redis } from '@/lib/redis';
import { FilesystemBackend, getStorageBackend } from '@/lib/storage';
import { verifyProxyToken } from '@/lib/storage/filesystem';
import { isPdfMagic } from '@/lib/services/berth-pdf-parser';
export const runtime = 'nodejs';
export const dynamic = 'force-dynamic';
fix(audit): post-review hardening across phases 0-7

15 of 17 findings from the consolidated audit (3 reviewer agents on the previously-shipped phase commits). Remaining two are nice-to-have follow-ups, deferred.

Critical (data integrity / security):
- Public berths API: closed-deal junction rows no longer flip a berth to "Under Offer" — filter on `interests.outcome IS NULL` so won/lost/cancelled don't pollute public-map status. Both list + single-mooring routes.
- Recommender heat: cancelled outcomes now count as fall-throughs (SQL was `LIKE 'lost%'`, which silently dropped them, leaving cancelled-only berths stuck in tier A).
- Filesystem presignDownload returns an absolute URL (origin from APP_URL) so emailed download links resolve from external mail clients.
- Magic-byte verification on the presigned-PUT path: both per-berth PDFs and brochures stream the first 5 bytes via the storage backend and reject + delete on `%PDF-` mismatch (was only enforced when the server saw the buffer; presign-PUT was wide open).
- Replay-protection TTL aligned to the token's own expiry (was a fixed 30 min, but send-out tokens live 24 h). Floor 60 s, ceiling 25 days.
- Brochures: unique partial index on (port_id) WHERE is_default=true + 0032 migration. Closes the read-then-write race in the create/update transactions.

Important:
- Recommender SQL: defense-in-depth `i.port_id = $portId` filter on the aggregates CTE.
- berth-pdf service: per-berth pg_advisory_xact_lock around the version-number SELECT + insert. Storage key is now UUID-based so concurrent uploads can't collide on blob paths. Replaces `nextVersionNumber` with the tx-bound variant.
- berth-pdf apply: rejects with ConflictError when parse_results contain a mooring-mismatch warning unless the caller passes `confirmMooringMismatch: true` (force-reconfirm gate was UI-only).
- Send-out body: HTML-escape brochure filename in the download-link fallback (XSS guard).
- parseDecimalWithUnit rejects negative numbers.
- listClients DISTINCT ON for primary contact resolution: bounds contact-row count to ~2 per client.

Defensive:
- verifyProxyToken rejects NaN/Infinity expiries via Number.isFinite.
- Replaced sql ANY() with inArray() in interest-berths.

Tests: 1145 -> 1163 passing. Deferred: bulk-send rate limit (no bulk endpoint today), markdown italic regex breaking links with asterisks (cosmetic).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 04:07:03 +02:00
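The magic-byte gate described above ("stream the first 5 bytes, reject on `%PDF-` mismatch") amounts to a tiny prefix check. The sketch below shows the assumed shape of such a check; the helper name is hypothetical (the project imports its own `isPdfMagic` for this).

```typescript
// Sketch of a PDF magic-byte gate: every real PDF begins with the 5 ASCII
// bytes "%PDF-", so checking only the first 5 bytes is enough to reject
// mislabelled uploads without buffering the whole file.
function looksLikePdf(firstBytes: Uint8Array): boolean {
  const magic = [0x25, 0x50, 0x44, 0x46, 0x2d]; // "%PDF-"
  if (firstBytes.length < magic.length) return false;
  return magic.every((byte, i) => firstBytes[i] === byte);
}
```

This catches content-type lies cheaply, though it is not full validation — a file can start with `%PDF-` and still be malformed.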
// Replay-protection TTL must outlive the token itself, otherwise the
// dedup key expires and the same token can be redeemed twice. We pin it
// to the token's own expiry (clamped to a 25-day ceiling so a forged
// far-future token can't pollute Redis indefinitely). Send-out emails
// mint 24-hour tokens so the typical TTL is 24h + a small buffer.
const REPLAY_TTL_FLOOR_SECONDS = 60; // never below 60s (post-expiry tail).
const REPLAY_TTL_CEILING_SECONDS = 25 * 24 * 60 * 60; // 25 days.
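// The clamp implied by the two constants above can be sketched as a small
// helper (hypothetical name; defaults restate the floor/ceiling values so the
// sketch stands alone):

```typescript
// TTL for the replay-dedup key: live as long as the token itself, but never
// below the floor (post-expiry tail) and never above the ceiling (a forged
// far-future expiry must not pin a Redis key for months).
function replayTtlSeconds(
  expiryEpochSeconds: number,
  nowMs: number,
  floor = 60,                    // REPLAY_TTL_FLOOR_SECONDS
  ceiling = 25 * 24 * 60 * 60,   // REPLAY_TTL_CEILING_SECONDS
): number {
  const remaining = Math.ceil(expiryEpochSeconds - nowMs / 1000);
  return Math.min(ceiling, Math.max(floor, remaining));
}
```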
export async function GET(
  _req: NextRequest,
  ctx: { params: Promise<{ token: string }> },
): Promise<NextResponse> {
  const { token } = await ctx.params;

  const backend = await getStorageBackend();
  if (!(backend instanceof FilesystemBackend)) {
    return NextResponse.json(
      { error: 'Storage proxy is only available in filesystem mode' },
      { status: 404 },
    );
  }

  const result = verifyProxyToken(token, backend.getHmacSecret());
  if (!result.ok) {
    logger.warn({ reason: result.reason }, 'Storage proxy token rejected');
    return NextResponse.json({ error: 'Invalid or expired token' }, { status: 403 });
  }
  const { payload } = result;
// Single-use enforcement. SET NX with a TTL pinned to the token's own
// expiry so the dedup window never closes before the token does. Using
// the body half of the token as the dedup key (signature included
// Single-use replay protection: key on the token body (keying on the full token
// would also work but body is enough; a reused token has the same body).
const replayKey = `storage:proxy:seen:${token.split('.')[0]}`;
const remainingSeconds = Math.max(
REPLAY_TTL_FLOOR_SECONDS,
Math.min(REPLAY_TTL_CEILING_SECONDS, payload.e - Math.floor(Date.now() / 1000) + 60),
);
const setOk = await redis.set(replayKey, '1', 'EX', remainingSeconds, 'NX');
if (setOk !== 'OK') {
logger.warn({ key: payload.k }, 'Storage proxy token replay rejected');
return NextResponse.json({ error: 'Token already used' }, { status: 403 });
}
let absolutePath: string;
try {
absolutePath = backend.resolveKeyForProxy(payload.k);
} catch (err) {
logger.warn({ err, key: payload.k }, 'Storage proxy key resolution failed');
return NextResponse.json({ error: 'Invalid key' }, { status: 400 });
}
let size: number;
try {
const stat = await fs.stat(absolutePath);
if (!stat.isFile()) {
return NextResponse.json({ error: 'Not found' }, { status: 404 });
}
size = stat.size;
} catch (err) {
const code = (err as NodeJS.ErrnoException).code;
if (code === 'ENOENT') {
return NextResponse.json({ error: 'Not found' }, { status: 404 });
}
throw err;
}
// Convert the Node Readable into a Web ReadableStream for NextResponse.
const nodeStream = createReadStream(absolutePath);
const webStream = Readable.toWeb(nodeStream) as unknown as ReadableStream<Uint8Array>;
const headers = new Headers();
headers.set('Content-Type', payload.c ?? 'application/octet-stream');
headers.set('Content-Length', String(size));
if (payload.f) {
// RFC 5987 — quote the filename and provide a UTF-8 fallback.
const safe = payload.f.replace(/"/g, '');
headers.set(
'Content-Disposition',
`attachment; filename="${safe}"; filename*=UTF-8''${encodeURIComponent(payload.f)}`,
);
}
headers.set('Cache-Control', 'private, no-store');
headers.set('X-Content-Type-Options', 'nosniff');
return new NextResponse(webStream, { status: 200, headers });
}
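The route above leans on a `verifyProxyToken` helper that is defined elsewhere. A minimal sketch of what a mint/verify pair for such a token could look like is below — the two-part `body.signature` shape matches the route's `token.split('.')[0]` replay key, and the `k`/`e`/`f`/`c` payload fields match the accesses in the handlers, but the exact field set, encoding, and helper names here are assumptions, not the real implementation:

```typescript
import { createHmac, randomBytes, timingSafeEqual } from 'node:crypto';

// Hypothetical payload shape, inferred from the route's payload.k / .e / .f / .c
// accesses; the nonce field name `n` is an assumption.
interface ProxyPayload {
  k: string; // storage key
  e: number; // unix expiry (seconds)
  n: string; // single-use nonce
  f?: string; // download filename
  c?: string; // content type
}

function sign(body: string, secret: string): string {
  return createHmac('sha256', secret).update(body).digest('base64url');
}

// Token = base64url(JSON payload) + '.' + HMAC(body).
export function mintProxyToken(payload: ProxyPayload, secret: string): string {
  const body = Buffer.from(JSON.stringify(payload)).toString('base64url');
  return `${body}.${sign(body, secret)}`;
}

export function verifyProxyTokenSketch(
  token: string,
  secret: string,
): { ok: true; payload: ProxyPayload } | { ok: false; reason: string } {
  const [body, sig] = token.split('.');
  if (!body || !sig) return { ok: false, reason: 'malformed' };
  const expected = Buffer.from(sign(body, secret));
  const actual = Buffer.from(sig);
  // timingSafeEqual throws on length mismatch, so guard the length first.
  if (actual.length !== expected.length || !timingSafeEqual(actual, expected)) {
    return { ok: false, reason: 'bad-signature' };
  }
  let payload: ProxyPayload;
  try {
    payload = JSON.parse(Buffer.from(body, 'base64url').toString('utf8'));
  } catch {
    return { ok: false, reason: 'bad-payload' };
  }
  if (!Number.isFinite(payload.e) || payload.e <= Math.floor(Date.now() / 1000)) {
    return { ok: false, reason: 'expired' };
  }
  return { ok: true, payload };
}
```

Signing the encoded body (rather than the raw JSON) keeps verification canonical: any byte-level change to the transmitted token invalidates the signature, and the expiry check rejects non-finite `e` values outright.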
/**
* Filesystem-backend upload proxy. The presigned URL minted by
* `FilesystemBackend.presignUpload` points here. Without this handler the
* browser-driven berth-PDF / brochure uploads would 405 in filesystem
 * deployments; the entire pluggable-storage abstraction relied on the
* GET-only counterpart for downloads.
*
* Same token-verify + single-use replay protection as GET, plus:
* - Hard size cap (rejects oversized bodies before any disk I/O).
* - Magic-byte check when the issuer declared content-type=application/pdf
* (matches the §14.6 §6c/§7c invariant: every upload path verifies
* bytes server-side, not just at the client).
*/
export async function PUT(
req: NextRequest,
ctx: { params: Promise<{ token: string }> },
): Promise<NextResponse> {
const { token } = await ctx.params;
const backend = await getStorageBackend();
if (!(backend instanceof FilesystemBackend)) {
return NextResponse.json(
{ error: 'Storage proxy is only available in filesystem mode' },
{ status: 404 },
);
}
const result = verifyProxyToken(token, backend.getHmacSecret());
if (!result.ok) {
logger.warn({ reason: result.reason }, 'Storage proxy upload token rejected');
return NextResponse.json({ error: 'Invalid or expired token' }, { status: 403 });
}
const { payload } = result;
// Separate replay namespace from GET so a token can validly serve one
// upload AND one download (the issuer only mints the second), but a
// PUT cannot be replayed against itself.
const replayKey = `storage:proxy:put:${token.split('.')[0]}`;
const remainingSeconds = Math.max(
REPLAY_TTL_FLOOR_SECONDS,
Math.min(REPLAY_TTL_CEILING_SECONDS, payload.e - Math.floor(Date.now() / 1000) + 60),
);
const setOk = await redis.set(replayKey, '1', 'EX', remainingSeconds, 'NX');
if (setOk !== 'OK') {
logger.warn({ key: payload.k }, 'Storage proxy upload token replay rejected');
return NextResponse.json({ error: 'Token already used' }, { status: 403 });
}
// Pre-flight size check via Content-Length so a malicious caller can't
// exhaust disk by streaming hundreds of MB before we look at the body.
const contentLengthHeader = req.headers.get('content-length');
const contentLength = contentLengthHeader ? Number(contentLengthHeader) : NaN;
if (Number.isFinite(contentLength) && contentLength > MAX_FILE_SIZE) {
return NextResponse.json(
{ error: `File exceeds ${MAX_FILE_SIZE} byte cap (Content-Length: ${contentLength})` },
{ status: 413 },
);
}
if (!req.body) {
return NextResponse.json({ error: 'Empty body' }, { status: 400 });
}
// Read the body into a buffer with a hard cap. Filesystem deployments are
// small-tenant (single-node only — see FilesystemBackend boot guard) so
// 50 MB ceiling fits comfortably in heap; no streaming needed.
let buffer: Buffer;
try {
const chunks: Buffer[] = [];
let total = 0;
const reader = req.body.getReader();
while (true) {
const { done, value } = await reader.read();
if (done) break;
total += value.byteLength;
if (total > MAX_FILE_SIZE) {
try {
await reader.cancel();
} catch {
/* ignore */
}
return NextResponse.json(
{ error: `File exceeds ${MAX_FILE_SIZE} byte cap` },
{ status: 413 },
);
}
chunks.push(Buffer.from(value));
}
buffer = Buffer.concat(chunks);
} catch (err) {
logger.warn({ err, key: payload.k }, 'Storage proxy upload read failed');
return NextResponse.json({ error: 'Upload read failed' }, { status: 400 });
}
// Magic-byte gate: when the token was minted with `c=application/pdf`
// (the only consumer today — berth PDFs + brochures), refuse anything
// that isn't actually a PDF. Mirrors the post-upload check in
// berth-pdf.service.ts so the two paths behave identically.
if (payload.c === 'application/pdf' && !isPdfMagic(buffer)) {
return NextResponse.json(
{ error: 'Uploaded file failed PDF magic-byte check (does not start with %PDF-).' },
{ status: 400 },
);
}
try {
await backend.put(payload.k, buffer, {
contentType: payload.c ?? 'application/octet-stream',
});
} catch (err) {
logger.error({ err, key: payload.k }, 'Storage proxy upload write failed');
return NextResponse.json({ error: 'Upload write failed' }, { status: 500 });
}
return NextResponse.json({ ok: true, key: payload.k, sizeBytes: buffer.length }, { status: 200 });
}
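The `isPdfMagic` helper the PUT handler calls is imported from elsewhere. Based on the error message above ("does not start with %PDF-"), it presumably checks only the five-byte PDF prefix; a sketch under that assumption:

```typescript
// Sketch of a magic-byte gate equivalent to the route's `isPdfMagic`.
// Assumption: only the five-byte `%PDF-` prefix is checked, as the
// handler's 400 error message states.
const PDF_MAGIC = Buffer.from('%PDF-');

export function isPdfMagicSketch(buffer: Buffer): boolean {
  return (
    buffer.length >= PDF_MAGIC.length &&
    buffer.subarray(0, PDF_MAGIC.length).equals(PDF_MAGIC)
  );
}
```

A prefix check like this cannot prove the body is a well-formed PDF, but it cheaply rejects mislabeled HTML, images, or scripts before they reach disk, which is all the route needs on top of the token's declared content type.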