pn-new-crm/scripts/migrate-storage.ts


feat(storage): pluggable s3-or-filesystem backend + migration CLI + admin UI

Phase 6a from docs/berth-recommender-and-pdf-plan.md §4.7a + §14.9a. Lays the storage groundwork for the Phase 6b/7 file-bearing schemas (per-berth PDFs, brochures) without touching those domains yet.

New files:
- src/lib/storage/index.ts: StorageBackend interface + per-process factory keyed on system_settings.
- src/lib/storage/s3.ts: S3-compatible backend (MinIO/AWS/B2/R2/Wasabi/Tigris) wrapping the existing minio JS client. Includes a healthCheck() used by the admin "Test connection" button.
- src/lib/storage/filesystem.ts: local filesystem backend with all §14.9a mitigations baked in.
- src/lib/storage/migrate.ts: shared migration core: pg_advisory_lock, per-row resumable progress markers, sha256 round-trip verification, atomic storage_backend flip on success.
- scripts/migrate-storage.ts: thin CLI shim around runMigration().
- src/app/api/storage/[token]/route.ts: filesystem proxy GET. Verifies the HMAC, enforces single-use replay protection via Redis SET NX, and streams via a NextResponse ReadableStream with explicit Content-Type + Content-Disposition. Node runtime only.
- src/app/api/v1/admin/storage/route.ts: GET status + POST connection test.
- src/app/api/v1/admin/storage/migrate/route.ts: super-admin-only POST that runs the exact same runMigration() as the CLI.
- src/app/(dashboard)/[portSlug]/admin/storage/page.tsx: super-admin UI (current backend, capacity stats, switch button with dry-run, test connection, backup hint).
- src/components/admin/storage-admin-panel.tsx: client component for the page above.

§14.9a critical mitigations implemented:
- Path traversal: storage keys are validated against ^[a-zA-Z0-9/_.-]+$; `..`, `.`, `//`, leading `/`, and overlong keys are rejected.
- Realpath: the storage root is realpath'd at create time, and every per-key resolution is checked against the realpath'd prefix.
- Storage root is created (or chmod'd) to 0o700.
- Multi-node refusal: FilesystemBackend.create() throws when MULTI_NODE_DEPLOYMENT=true.
- HMAC token: sha256 HMAC over the (key, expiry, nonce, filename, content-type) payload, verified with timingSafeEqual; bad-signature, expired, and invalid-key payloads all return 403.
- Single-use replay: the token body is cached in Redis with SET NX EX 1800s.
- sha256 round-trip: copyAndVerify() re-fetches from the target after put() and aborts the migration on any mismatch.
- Free-disk pre-flight: when migrating to filesystem, sums byte counts via source.head() and aborts if free space < total * 1.2.
- pg_advisory_lock(0xc7000a01) prevents concurrent migrations.
- Resumable: per-row progress markers in _storage_migration_progress.

system_settings keys read by the factory (jsonb, no schema change): storage_backend, storage_s3_endpoint, storage_s3_region, storage_s3_bucket, storage_s3_access_key, storage_s3_secret_key_encrypted, storage_s3_force_path_style, storage_filesystem_root, storage_proxy_hmac_secret_encrypted. Defaults: storage_backend=`s3`, storage_filesystem_root=`./storage` (./storage added to .gitignore).

Tests added (34 tests, all green):
- tests/unit/storage/filesystem-backend.test.ts: key-validation allow/reject matrix, realpath escape, 0o700 perms, multi-node refusal, HMAC token sign/verify/tamper/expire/invalid-key.
- tests/unit/storage/copy-and-verify.test.ts: a sha256 mismatch on the round-trip aborts the migration.
- tests/integration/storage/proxy-route.test.ts: happy path, wrong HMAC secret, expired token, replay rejection.

Phase 6a ships zero file-bearing tables: TABLES_WITH_STORAGE_KEYS is intentionally empty. berth_pdf_versions and brochure_versions land in Phase 6b and join the list there. Existing s3_key columns: only gdpr_export_jobs.storage_key, which is already named correctly, so no rename is needed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
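The path-traversal mitigation above can be sketched as a standalone validator. This is an illustrative reconstruction from the commit description, not the actual `src/lib/storage` export; the name `isValidStorageKey` and the 1024-byte length cap are assumptions.

```typescript
// Sketch of §14.9a key validation: allow-list pattern plus explicit
// rejection of `.`/`..` segments, `//`, leading `/`, and overlong keys.
const KEY_PATTERN = /^[a-zA-Z0-9/_.-]+$/;
const MAX_KEY_LENGTH = 1024; // assumed cap, not stated in the commit

export function isValidStorageKey(key: string): boolean {
  if (key.length === 0 || key.length > MAX_KEY_LENGTH) return false;
  if (!KEY_PATTERN.test(key)) return false;
  if (key.startsWith('/')) return false; // no absolute paths
  if (key.includes('//')) return false;  // no empty segments
  // reject `.` and `..` path segments outright
  return key.split('/').every((seg) => seg !== '.' && seg !== '..');
}
```

Note that the regex alone is not enough: `..` and `/` are both inside the allowed character class, which is why the segment-level checks follow it.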
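The HMAC token scheme can be sketched as follows, assuming the shape of the payload from the commit description; `signToken`/`verifyToken` and the newline-joined payload encoding are hypothetical, not the real route's code.

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Illustrative sketch: sha256 HMAC over the (key, expiry, nonce,
// filename, content-type) payload, verified with timingSafeEqual.
interface TokenPayload {
  key: string;
  expiresAt: number; // unix seconds
  nonce: string;
  filename: string;
  contentType: string;
}

function payloadString(p: TokenPayload): string {
  return [p.key, String(p.expiresAt), p.nonce, p.filename, p.contentType].join('\n');
}

export function signToken(p: TokenPayload, secret: string): string {
  return createHmac('sha256', secret).update(payloadString(p)).digest('hex');
}

export function verifyToken(p: TokenPayload, sig: string, secret: string): boolean {
  if (p.expiresAt < Math.floor(Date.now() / 1000)) return false; // expired
  const expected = Buffer.from(signToken(p, secret), 'hex');
  const given = Buffer.from(sig, 'hex');
  // timingSafeEqual throws on length mismatch, so compare lengths first
  if (expected.length !== given.length) return false;
  return timingSafeEqual(expected, given);
}
```

In the real route a failed check maps to a 403 response; the single-use guard (Redis SET NX) sits on top of this and is not shown here.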
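The sha256 round-trip check can be sketched like this. The two-method `StorageBackend` shape and the error message are assumptions for illustration; the real interface in src/lib/storage/index.ts is richer (it at least has `head()`).

```typescript
import { createHash } from 'node:crypto';

// Minimal sketch of copyAndVerify(): put the object on the target,
// re-fetch it, and abort the migration on a sha256 mismatch.
export interface StorageBackend {
  get(key: string): Promise<Buffer>;
  put(key: string, data: Buffer): Promise<void>;
}

const sha256 = (buf: Buffer): string =>
  createHash('sha256').update(buf).digest('hex');

export async function copyAndVerify(
  source: StorageBackend,
  target: StorageBackend,
  key: string,
): Promise<void> {
  const data = await source.get(key);
  await target.put(key, data);
  const roundTrip = await target.get(key); // re-fetch from the target
  if (sha256(roundTrip) !== sha256(data)) {
    throw new Error(`sha256 mismatch for ${key}; aborting migration`);
  }
}
```

Re-fetching from the target (rather than trusting put()'s return) is what catches truncated writes and backend-side corruption.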
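The free-disk pre-flight is simple arithmetic and can be stated as a one-liner; `preflightOk` is a hypothetical name, and in the real code the sizes come from source.head() calls.

```typescript
// Sketch of the free-disk pre-flight: sum the per-object byte counts
// and require free space to cover the total plus a 20% safety margin.
export function preflightOk(objectSizes: number[], freeBytes: number): boolean {
  const total = objectSizes.reduce((a, b) => a + b, 0);
  return freeBytes >= total * 1.2;
}
```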
2026-05-05 03:15:59 +02:00
/**
* Storage backend migration CLI; see §4.7a + §14.9a of
* docs/berth-recommender-and-pdf-plan.md.
*
* pnpm tsx scripts/migrate-storage.ts --from s3 --to filesystem [--dry-run]
* pnpm tsx scripts/migrate-storage.ts --from filesystem --to s3
*
* The actual migration logic lives in `src/lib/storage/migrate.ts` so the
* admin UI's "Switch backend" button can run the exact same code path. This
* file is a thin CLI wrapper.
*/
import { logger } from '@/lib/logger';
import { parseArgs, runMigration } from '@/lib/storage/migrate';

async function main(): Promise<void> {
  const args = parseArgs(process.argv.slice(2));
  logger.info({ args }, 'Starting storage migration');
  const result = await runMigration(args);
  logger.info({ result }, 'Storage migration complete');
  console.log(JSON.stringify(result, null, 2));
  process.exit(0);
}

main().catch((err) => {
  logger.error({ err }, 'Storage migration failed');
  console.error(err);
  process.exit(2);
});