feat(storage): pluggable s3-or-filesystem backend + migration CLI + admin UI
Phase 6a from docs/berth-recommender-and-pdf-plan.md §4.7a + §14.9a. Lays
the storage groundwork for Phase 6b/7 file-bearing schemas (per-berth PDFs,
brochures) without touching those domains yet.
New files:
- src/lib/storage/index.ts: StorageBackend interface + per-process
  factory keyed on system_settings (a rough sketch of the interface
  follows this list).
- src/lib/storage/s3.ts: S3-compatible backend (MinIO/AWS/B2/R2/
  Wasabi/Tigris) wrapping the existing minio JS client; includes a
  healthCheck() used by the admin "Test connection" button.
- src/lib/storage/filesystem.ts: Local filesystem backend with all
  §14.9a mitigations baked in.
- src/lib/storage/migrate.ts: Shared migration core (pg_advisory_lock,
  per-row resumable progress markers, sha256 round-trip verification,
  atomic storage_backend flip on success).
- scripts/migrate-storage.ts: Thin CLI shim around runMigration().
- src/app/api/storage/[token]/route.ts: Filesystem proxy GET; verifies
  the HMAC, enforces single-use replay protection via Redis SET NX, and
  streams via a NextResponse ReadableStream with explicit Content-Type
  + Content-Disposition. Node runtime only.
- src/app/api/v1/admin/storage/route.ts: GET status + POST connection
  test.
- src/app/api/v1/admin/storage/migrate/route.ts: Super-admin-only POST
  that runs the exact same runMigration() as the CLI.
- src/app/(dashboard)/[portSlug]/admin/storage/page.tsx: Super-admin UI
  (current backend, capacity stats, switch button with dry-run, test
  connection, backup hint).
- src/components/admin/storage-admin-panel.tsx: Client component for
  the page above.
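For orientation, a rough sketch of the StorageBackend shape as implied by
the call sites in this change (put, head, presignDownload, healthCheck);
the option and return types here are guesses, and the real interface in
src/lib/storage/index.ts may add or rename members:

  export interface PresignDownloadOptions {
    expirySeconds: number;
    filename?: string;
    contentType?: string;
  }

  export interface StorageBackend {
    // Store a blob under a validated storage key.
    put(key: string, body: Buffer, opts: { contentType: string }): Promise<void>;
    // Size lookup; the migration's free-disk pre-flight sums these.
    head(key: string): Promise<{ size: number }>;
    // Time-limited download URL: an S3 presigned URL, or a
    // /api/storage/<token> proxy URL for the filesystem backend.
    presignDownload(key: string, opts: PresignDownloadOptions): Promise<{ url: string }>;
    // Backs the admin "Test connection" button.
    healthCheck(): Promise<void>;
  }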
§14.9a critical mitigations implemented:
- Path-traversal: storage keys validated against ^[a-zA-Z0-9/_.-]+$;
`..`, `.`, `//`, leading `/`, and overlength keys rejected.
- Realpath: storage root realpath'd at create time, every per-key
resolution checked against the realpath'd prefix.
- Storage root created (or chmod'd) to 0o700.
- Multi-node refusal: FilesystemBackend.create() throws when
MULTI_NODE_DEPLOYMENT=true.
- HMAC token: sha256-HMAC over the (key, expiry, nonce, filename,
  content-type) payload, verified with timingSafeEqual; bad signatures,
  expired tokens, and invalid-key payloads all return 403 (see the
  sketch after this list).
- Single-use replay: token body cached in Redis SET NX EX 1800s.
- sha256 round-trip: copyAndVerify() re-fetches from the target after
put() and aborts the migration on any mismatch.
- Free-disk pre-flight: when migrating to filesystem, sums byte counts
via source.head() and aborts if free space < total * 1.2.
- pg_advisory_lock(0xc7000a01) prevents concurrent migrations.
- Resumable: per-row progress markers in _storage_migration_progress.
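For illustration, a minimal sketch of this token scheme. signProxyToken
and the {k, e, n} payload fields match what the new tests exercise; the
verify helper, its name, and the base64url/"." encoding are assumptions
rather than the actual implementation in src/lib/storage/filesystem.ts:

  import { createHmac, timingSafeEqual } from 'node:crypto';

  interface ProxyTokenPayload {
    k: string;  // storage key
    e: number;  // expiry (unix seconds)
    n: string;  // nonce, keyed in Redis for single-use replay protection
    f?: string; // download filename
    c?: string; // content type
  }

  export function signProxyToken(payload: ProxyTokenPayload, secret: string): string {
    const body = Buffer.from(JSON.stringify(payload)).toString('base64url');
    const sig = createHmac('sha256', secret).update(body).digest('base64url');
    return `${body}.${sig}`;
  }

  // Hypothetical counterpart: returns the payload on success, null on any
  // failure (the route would map null to a 403 response).
  export function verifyProxyToken(token: string, secret: string): ProxyTokenPayload | null {
    const [body, sig] = token.split('.');
    if (!body || !sig) return null;
    const expected = createHmac('sha256', secret).update(body).digest();
    const given = Buffer.from(sig, 'base64url');
    // timingSafeEqual throws on length mismatch, so compare lengths first.
    if (given.length !== expected.length || !timingSafeEqual(given, expected)) return null;
    let payload: ProxyTokenPayload;
    try {
      payload = JSON.parse(Buffer.from(body, 'base64url').toString('utf8'));
    } catch {
      return null;
    }
    if (payload.e < Math.floor(Date.now() / 1000)) return null; // expired
    return payload;
  }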
system_settings keys read by the factory (jsonb, no schema change):
storage_backend, storage_s3_endpoint, storage_s3_region,
storage_s3_bucket, storage_s3_access_key,
storage_s3_secret_key_encrypted, storage_s3_force_path_style,
storage_filesystem_root, storage_proxy_hmac_secret_encrypted.
Defaults: storage_backend=`s3`, storage_filesystem_root=`./storage`
(./storage added to .gitignore).
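A hypothetical sketch of how the per-process factory might map these
settings onto a backend; the S3Backend export and constructor shape and
the loadSystemSettings/decryptSecret helpers are assumptions, while the
key names and defaults are the ones listed above:

  import { FilesystemBackend } from '@/lib/storage/filesystem';
  import { S3Backend } from '@/lib/storage/s3'; // assumed export name
  import type { StorageBackend } from '@/lib/storage';

  // Assumed helpers, not part of this change:
  declare function loadSystemSettings(): Promise<Record<string, any>>;
  declare function decryptSecret(value: string): string;

  let cached: StorageBackend | null = null;

  export async function getStorageBackend(): Promise<StorageBackend> {
    if (cached) return cached;
    const s = await loadSystemSettings(); // jsonb system_settings rows
    if (s.storage_backend === 'filesystem') {
      cached = await FilesystemBackend.create({
        root: s.storage_filesystem_root ?? './storage',
        proxyHmacSecretEncrypted: s.storage_proxy_hmac_secret_encrypted ?? null,
      });
    } else {
      // Default: storage_backend = 's3'.
      cached = new S3Backend({
        endpoint: s.storage_s3_endpoint,
        region: s.storage_s3_region,
        bucket: s.storage_s3_bucket,
        accessKey: s.storage_s3_access_key,
        secretKey: decryptSecret(s.storage_s3_secret_key_encrypted),
        forcePathStyle: s.storage_s3_force_path_style === true,
      });
    }
    return cached;
  }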
Tests added (34 tests, all green):
- tests/unit/storage/filesystem-backend.test.ts — key validation
allow/reject matrix, realpath escape, 0o700 perms, multi-node
refusal, HMAC token sign/verify/tamper/expire/invalid-key.
- tests/unit/storage/copy-and-verify.test.ts — sha256 mismatch on
round-trip aborts the migration.
- tests/integration/storage/proxy-route.test.ts — happy path, wrong
HMAC secret, expired token, replay rejection.
Phase 6a ships zero file-bearing tables — TABLES_WITH_STORAGE_KEYS is
intentionally empty. berth_pdf_versions and brochure_versions land in
Phase 6b and join the list there. Existing s3_key columns: only
gdpr_export_jobs.storage_key, already named correctly — no rename needed.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
tests/integration/storage/proxy-route.test.ts:
/**
 * Integration test: GET /api/storage/[token]
 *
 * Exercises the §14.9a critical mitigations on the live route:
 * - HMAC verification: a token signed with the wrong secret is rejected.
 * - Expiry: an expired token is rejected.
 * - Single-use replay: a token used twice (within the replay TTL) is
 *   rejected the second time.
 * - Happy path: a valid token streams the file with correct headers.
 *
 * The storage backend itself is mocked to a FilesystemBackend rooted in a
 * tempdir. Redis is mocked to an in-memory map so the test doesn't need
 * a live Redis.
 */

import { mkdtemp, rm } from 'node:fs/promises';
import { tmpdir } from 'node:os';
import * as path from 'node:path';

import { afterEach, beforeAll, beforeEach, describe, expect, it, vi } from 'vitest';

const VALID_KEY = 'a'.repeat(64);

// Hoisted in-memory Redis. The proxy route uses SET NX EX, so we model
// just enough behaviour to track keys that have been seen.
const redisStore = new Map<string, string>();

vi.mock('@/lib/redis', () => ({
  redis: {
    set: vi.fn(async (key: string, value: string, ..._args: unknown[]) => {
      // _args = ['EX', ttl, 'NX'] in our usage. Honour NX semantics.
      const nxIndex = _args.findIndex((a) => a === 'NX');
      if (nxIndex >= 0 && redisStore.has(key)) return null;
      redisStore.set(key, value);
      return 'OK';
    }),
  },
}));

vi.mock('@/lib/logger', () => ({
  logger: { info: vi.fn(), warn: vi.fn(), error: vi.fn(), debug: vi.fn() },
}));

beforeAll(() => {
  process.env.EMAIL_CREDENTIAL_KEY = VALID_KEY;
  process.env.BETTER_AUTH_SECRET = 'a'.repeat(64);
});

describe('GET /api/storage/[token]', () => {
  let storageRoot: string;
  let backend: import('@/lib/storage/filesystem').FilesystemBackend;
  let getMock: ReturnType<typeof vi.fn>;

  beforeEach(async () => {
    redisStore.clear();
    storageRoot = await mkdtemp(path.join(tmpdir(), 'pn-storage-route-'));

    // Use the real FilesystemBackend so the resolution / realpath logic is
    // genuinely exercised; mock just `getStorageBackend()` to return it.
    const { FilesystemBackend } = await import('@/lib/storage/filesystem');
    backend = await FilesystemBackend.create({
      root: storageRoot,
      proxyHmacSecretEncrypted: null,
    });

    getMock = vi.fn(async () => backend);
    vi.doMock('@/lib/storage', async () => {
      const real = await vi.importActual<typeof import('@/lib/storage')>('@/lib/storage');
      return { ...real, getStorageBackend: getMock };
    });
  });

  afterEach(async () => {
    vi.doUnmock('@/lib/storage');
    await rm(storageRoot, { recursive: true, force: true });
  });

  async function callRoute(token: string) {
    const { GET } = await import('@/app/api/storage/[token]/route');
    return GET(new Request(`http://test/api/storage/${token}`) as never, {
      params: Promise.resolve({ token }),
    });
  }

  it('serves a file with a valid token (happy path)', async () => {
    await backend.put('berths/abc/file.txt', Buffer.from('hello world'), {
      contentType: 'text/plain',
    });
    const presigned = await backend.presignDownload('berths/abc/file.txt', {
      expirySeconds: 60,
      filename: 'file.txt',
      contentType: 'text/plain',
    });
    const token = presigned.url.split('/api/storage/').pop()!;
    const res = await callRoute(token);
    expect(res.status).toBe(200);
    expect(res.headers.get('Content-Type')).toBe('text/plain');
    expect(res.headers.get('X-Content-Type-Options')).toBe('nosniff');
    const text = await res.text();
    expect(text).toBe('hello world');
  });

  it('rejects a token signed with the wrong HMAC secret', async () => {
    await backend.put('berths/abc/file.txt', Buffer.from('hello'), {
      contentType: 'text/plain',
    });
    const { signProxyToken } = await import('@/lib/storage/filesystem');
    const badToken = signProxyToken(
      {
        k: 'berths/abc/file.txt',
        e: Math.floor(Date.now() / 1000) + 60,
        n: 'nonce',
      },
      'wrong-secret',
    );
    const res = await callRoute(badToken);
    expect(res.status).toBe(403);
    const body = await res.json();
    expect(body.error).toMatch(/Invalid|expired/i);
  });

  it('rejects an expired token', async () => {
    await backend.put('berths/abc/file.txt', Buffer.from('hello'), {
      contentType: 'text/plain',
    });
    const { signProxyToken } = await import('@/lib/storage/filesystem');
    const expiredToken = signProxyToken(
      {
        k: 'berths/abc/file.txt',
        e: Math.floor(Date.now() / 1000) - 1,
        n: 'nonce',
      },
      backend.getHmacSecret(),
    );
    const res = await callRoute(expiredToken);
    expect(res.status).toBe(403);
  });

  it('refuses to replay a token a second time within the TTL', async () => {
    await backend.put('berths/abc/file.txt', Buffer.from('hello'), {
      contentType: 'text/plain',
    });
    const presigned = await backend.presignDownload('berths/abc/file.txt', {
      expirySeconds: 60,
    });
    const token = presigned.url.split('/api/storage/').pop()!;
    const first = await callRoute(token);
    expect(first.status).toBe(200);
    await first.text();

    const second = await callRoute(token);
    expect(second.status).toBe(403);
    const body = await second.json();
    expect(body.error).toMatch(/already used/i);
  });
});