feat(storage): pluggable s3-or-filesystem backend + migration CLI + admin UI
Phase 6a from docs/berth-recommender-and-pdf-plan.md §4.7a + §14.9a. Lays
the storage groundwork for Phase 6b/7 file-bearing schemas (per-berth PDFs,
brochures) without touching those domains yet.
New files:
- src/lib/storage/index.ts: StorageBackend interface + per-process factory
  keyed on system_settings.
- src/lib/storage/s3.ts: S3-compatible backend (MinIO/AWS/B2/R2/Wasabi/Tigris)
  wrapping the existing minio JS client. Includes a healthCheck() used by the
  admin "Test connection" button.
- src/lib/storage/filesystem.ts: Local filesystem backend with all §14.9a
  mitigations baked in.
- src/lib/storage/migrate.ts: Shared migration core — pg_advisory_lock,
  per-row resumable progress markers, sha256 round-trip verification, atomic
  storage_backend flip on success.
- scripts/migrate-storage.ts: Thin CLI shim around runMigration().
- src/app/api/storage/[token]/route.ts: Filesystem proxy GET. Verifies HMAC,
  enforces single-use replay protection via Redis SET NX, streams via
  NextResponse ReadableStream with explicit Content-Type +
  Content-Disposition. Node runtime only.
- src/app/api/v1/admin/storage/route.ts: GET status + POST connection test.
- src/app/api/v1/admin/storage/migrate/route.ts: Super-admin-only POST that
  runs the exact same runMigration() as the CLI.
- src/app/(dashboard)/[portSlug]/admin/storage/page.tsx: Super-admin admin UI
  (current backend, capacity stats, switch button with dry-run, test
  connection, backup hint).
- src/components/admin/storage-admin-panel.tsx: Client component for the page
  above.
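The files above all revolve around the StorageBackend abstraction. As a rough sketch of the shape such a contract might take (the method names, option types, and the in-memory demo class are assumptions for illustration, not the actual src/lib/storage/index.ts):

```typescript
// Hypothetical sketch of the StorageBackend contract; real signatures may differ.
interface PutOptions {
  contentType?: string;
}

interface StorageBackend {
  readonly name: 's3' | 'filesystem';
  put(key: string, body: Buffer, opts?: PutOptions): Promise<void>;
  get(key: string): Promise<Buffer>;
  head(key: string): Promise<{ sizeBytes: number } | null>;
  delete(key: string): Promise<void>;
}

// Tiny in-memory backend showing the contract in use (illustration only).
class MemoryBackend implements StorageBackend {
  readonly name = 'filesystem' as const;
  private files = new Map<string, Buffer>();

  async put(key: string, body: Buffer): Promise<void> {
    this.files.set(key, body);
  }
  async get(key: string): Promise<Buffer> {
    const buf = this.files.get(key);
    if (!buf) throw new Error(`not found: ${key}`);
    return buf;
  }
  async head(key: string): Promise<{ sizeBytes: number } | null> {
    const buf = this.files.get(key);
    return buf ? { sizeBytes: buf.length } : null;
  }
  async delete(key: string): Promise<void> {
    this.files.delete(key);
  }
}
```

A factory keyed on system_settings.storage_backend would then return an S3Backend or FilesystemBackend instance satisfying this same interface.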
§14.9a critical mitigations implemented:
- Path-traversal: storage keys validated against ^[a-zA-Z0-9/_.-]+$;
`..`, `.`, `//`, leading `/`, and overlength keys rejected.
- Realpath: storage root realpath'd at create time, every per-key
resolution checked against the realpath'd prefix.
- Storage root created (or chmod'd) to 0o700.
- Multi-node refusal: FilesystemBackend.create() throws when
MULTI_NODE_DEPLOYMENT=true.
- HMAC token: sha256-HMAC over the (key, expiry, nonce, filename,
content-type) payload. Verified with timingSafeEqual; bad sig,
expired, or invalid-key payloads all return 403.
- Single-use replay: token body cached in Redis SET NX EX 1800s.
- sha256 round-trip: copyAndVerify() re-fetches from the target after
put() and aborts the migration on any mismatch.
- Free-disk pre-flight: when migrating to filesystem, sums byte counts
via source.head() and aborts if free space < total * 1.2.
- pg_advisory_lock(0xc7000a01) prevents concurrent migrations.
- Resumable: per-row progress markers in _storage_migration_progress.
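The path-traversal rules above (allow-list regex, rejection of `..` and `.` segments, `//`, leading `/`, and overlength keys) amount to a pure validator. A minimal sketch — the function name and the length limit are assumptions, not the actual filesystem.ts code:

```typescript
// Illustrative storage-key validator implementing the §14.9a rules listed above.
const KEY_PATTERN = /^[a-zA-Z0-9/_.-]+$/;
const MAX_KEY_LENGTH = 1024; // assumed limit; the real value may differ

function isValidStorageKey(key: string): boolean {
  if (key.length === 0 || key.length > MAX_KEY_LENGTH) return false;
  if (!KEY_PATTERN.test(key)) return false; // character allow-list
  if (key.startsWith('/')) return false;    // no absolute paths
  if (key.includes('//')) return false;     // no empty segments
  // no `.` or `..` path segments anywhere in the key
  const segments = key.split('/');
  if (segments.some((s) => s === '.' || s === '..')) return false;
  return true;
}
```

Note that this check alone is not sufficient, which is why the realpath prefix check above exists as a second layer: even a syntactically valid key must resolve inside the realpath'd storage root.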
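The HMAC token scheme above (sha256 HMAC over the key/expiry/nonce/filename/content-type payload, verified with timingSafeEqual) could look roughly like the following; the base64url payload encoding and the sign/verify names are assumptions, not the actual implementation:

```typescript
import { createHmac, randomBytes, timingSafeEqual } from 'node:crypto';

// Illustrative token payload matching the fields named above.
interface TokenPayload {
  key: string;
  exp: number; // unix seconds
  nonce: string;
  filename: string;
  contentType: string;
}

function sign(payload: TokenPayload, secret: string): string {
  const body = Buffer.from(JSON.stringify(payload)).toString('base64url');
  const mac = createHmac('sha256', secret).update(body).digest('base64url');
  return `${body}.${mac}`;
}

// Returns the payload on success, null on bad signature or expiry.
function verify(
  token: string,
  secret: string,
  now = Math.floor(Date.now() / 1000),
): TokenPayload | null {
  const dot = token.lastIndexOf('.');
  if (dot < 0) return null;
  const body = token.slice(0, dot);
  const given = Buffer.from(token.slice(dot + 1), 'base64url');
  const expected = createHmac('sha256', secret).update(body).digest();
  // Constant-time comparison; length check first since timingSafeEqual throws on mismatch.
  if (given.length !== expected.length || !timingSafeEqual(given, expected)) return null;
  let payload: TokenPayload;
  try {
    payload = JSON.parse(Buffer.from(body, 'base64url').toString());
  } catch {
    return null;
  }
  if (payload.exp <= now) return null;
  return payload;
}
```

The single-use property is layered on separately: the proxy route caches the verified token body in Redis with SET NX, so a second presentation of the same token fails even though the signature is still valid.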
system_settings keys read by the factory (jsonb, no schema change):
storage_backend, storage_s3_endpoint, storage_s3_region,
storage_s3_bucket, storage_s3_access_key,
storage_s3_secret_key_encrypted, storage_s3_force_path_style,
storage_filesystem_root, storage_proxy_hmac_secret_encrypted.
Defaults: storage_backend=`s3`, storage_filesystem_root=`./storage`
(./storage added to .gitignore).
Tests added (34 tests, all green):
- tests/unit/storage/filesystem-backend.test.ts — key validation
allow/reject matrix, realpath escape, 0o700 perms, multi-node
refusal, HMAC token sign/verify/tamper/expire/invalid-key.
- tests/unit/storage/copy-and-verify.test.ts — sha256 mismatch on
round-trip aborts the migration.
- tests/integration/storage/proxy-route.test.ts — happy path, wrong
HMAC secret, expired token, replay rejection.
Phase 6a ships zero file-bearing tables — TABLES_WITH_STORAGE_KEYS is
intentionally empty. berth_pdf_versions and brochure_versions land in
Phase 6b and join the list there. Existing s3_key columns: only
gdpr_export_jobs.storage_key, already named correctly — no rename needed.
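Since the admin status route iterates `tbl.table` / `tbl.keyColumn`, the descriptor shape of TABLES_WITH_STORAGE_KEYS is presumably something like the following (the type name is an assumption; the empty list and the Phase 6b example entry come from the notes above):

```typescript
// Descriptor shape consistent with the tbl.table / tbl.keyColumn usage in the
// admin status route; the exact type name is hypothetical.
interface StorageKeyTable {
  table: string;     // SQL table name
  keyColumn: string; // column holding the storage key
}

// Intentionally empty in Phase 6a. Phase 6b would add entries such as
// { table: 'berth_pdf_versions', keyColumn: 'storage_key' }.
const TABLES_WITH_STORAGE_KEYS: StorageKeyTable[] = [];
```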
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
src/app/api/v1/admin/storage/migrate/route.ts (new file, 40 lines)
@@ -0,0 +1,40 @@
/**
 * Admin-triggered storage migration. Same code path as `scripts/migrate-storage.ts`
 * (both delegate to `runMigration()` in `@/lib/storage/migrate`). Body:
 * { from: 's3'|'filesystem', to: 's3'|'filesystem', dryRun?: boolean }
 *
 * Super-admin only. The `/[portSlug]/admin` segment is already gated; this
 * route enforces the same constraint defensively.
 */

import { NextResponse } from 'next/server';
import { z } from 'zod';

import { withAuth } from '@/lib/api/helpers';
import { parseBody } from '@/lib/api/route-helpers';
import { errorResponse, ForbiddenError } from '@/lib/errors';
import { runMigration } from '@/lib/storage/migrate';

const schema = z.object({
  from: z.enum(['s3', 'filesystem']),
  to: z.enum(['s3', 'filesystem']),
  dryRun: z.boolean().default(false),
});

export const runtime = 'nodejs';

export const POST = withAuth(async (req, ctx) => {
  try {
    if (!ctx.isSuperAdmin) {
      throw new ForbiddenError('Super admin only');
    }
    const body = await parseBody(req, schema);
    if (body.from === body.to) {
      return NextResponse.json({ error: 'from and to must differ' }, { status: 400 });
    }
    const result = await runMigration({ ...body, userId: ctx.userId });
    return NextResponse.json({ data: result });
  } catch (error) {
    return errorResponse(error);
  }
});
src/app/api/v1/admin/storage/route.ts (new file, 72 lines)
@@ -0,0 +1,72 @@
/**
 * Admin storage status + connection test. Super-admin only.
 *
 * GET  /api/v1/admin/storage — current backend + capacity stats
 * POST /api/v1/admin/storage — exercise list/put/get/delete on s3
 */

import { NextResponse } from 'next/server';

import { withAuth } from '@/lib/api/helpers';
import { errorResponse, ForbiddenError } from '@/lib/errors';
import { TABLES_WITH_STORAGE_KEYS } from '@/lib/storage/migrate';
import { getStorageBackend } from '@/lib/storage';
import { S3Backend } from '@/lib/storage/s3';
import { db } from '@/lib/db';
import { sql } from 'drizzle-orm';

export const runtime = 'nodejs';

export const GET = withAuth(async (_req, ctx) => {
  try {
    if (!ctx.isSuperAdmin) {
      throw new ForbiddenError('Super admin only');
    }
    const backend = await getStorageBackend();

    // Aggregate row counts across every storage-bearing table.
    // totalBytes stays 0 for now: per-row byte sizes are not yet tracked.
    let fileCount = 0;
    const totalBytes = 0;
    for (const tbl of TABLES_WITH_STORAGE_KEYS) {
      const result = await db.execute(
        sql.raw(
          `SELECT COUNT(*)::bigint AS n FROM ${tbl.table} WHERE ${tbl.keyColumn} IS NOT NULL`,
        ),
      );
      const rows = (
        Array.isArray(result) ? result : ((result as { rows?: unknown[] }).rows ?? [])
      ) as Array<{ n: number | string }>;
      fileCount += Number(rows[0]?.n ?? 0);
    }

    return NextResponse.json({
      data: {
        backend: backend.name,
        fileCount,
        totalBytes,
        tablesTracked: TABLES_WITH_STORAGE_KEYS.map((t) => t.table),
      },
    });
  } catch (error) {
    return errorResponse(error);
  }
});

export const POST = withAuth(async (_req, ctx) => {
  try {
    if (!ctx.isSuperAdmin) {
      throw new ForbiddenError('Super admin only');
    }
    const backend = await getStorageBackend();
    if (!(backend instanceof S3Backend)) {
      return NextResponse.json(
        { ok: false, error: 'Test connection only available for S3 backend' },
        { status: 400 },
      );
    }
    const result = await backend.healthCheck();
    return NextResponse.json(result);
  } catch (error) {
    return errorResponse(error);
  }
});