pn-new-crm/src/lib/storage/index.ts

feat(storage): pluggable s3-or-filesystem backend + migration CLI + admin UI

Phase 6a from docs/berth-recommender-and-pdf-plan.md §4.7a + §14.9a. Lays the
storage groundwork for Phase 6b/7 file-bearing schemas (per-berth PDFs,
brochures) without touching those domains yet.

New files:
- src/lib/storage/index.ts: StorageBackend interface + per-process factory
  keyed on system_settings.
- src/lib/storage/s3.ts: S3-compatible backend (MinIO/AWS/B2/R2/Wasabi/Tigris)
  wrapping the existing minio JS client. Includes a healthCheck() used by the
  admin "Test connection" button.
- src/lib/storage/filesystem.ts: local filesystem backend with all §14.9a
  mitigations baked in.
- src/lib/storage/migrate.ts: shared migration core — pg_advisory_lock,
  per-row resumable progress markers, sha256 round-trip verification, atomic
  storage_backend flip on success.
- scripts/migrate-storage.ts: thin CLI shim around runMigration().
- src/app/api/storage/[token]/route.ts: filesystem proxy GET. Verifies HMAC,
  enforces single-use replay protection via Redis SET NX, streams via
  NextResponse ReadableStream with explicit Content-Type and
  Content-Disposition. Node runtime only.
- src/app/api/v1/admin/storage/route.ts: GET status + POST connection test.
- src/app/api/v1/admin/storage/migrate/route.ts: super-admin-only POST that
  runs the exact same runMigration() as the CLI.
- src/app/(dashboard)/[portSlug]/admin/storage/page.tsx: super-admin admin UI
  (current backend, capacity stats, switch button with dry-run, test
  connection, backup hint).
- src/components/admin/storage-admin-panel.tsx: client component for the page
  above.

§14.9a critical mitigations implemented:
- Path traversal: storage keys validated against ^[a-zA-Z0-9/_.-]+$; `..`,
  `.`, `//`, leading `/`, and overlength keys rejected.
- Realpath: storage root realpath'd at create time; every per-key resolution
  checked against the realpath'd prefix.
- Storage root created (or chmod'd) to 0o700.
- Multi-node refusal: FilesystemBackend.create() throws when
  MULTI_NODE_DEPLOYMENT=true.
- HMAC token: sha256-HMAC over the (key, expiry, nonce, filename,
  content-type) payload. Verified with timingSafeEqual; bad-signature,
  expired, or invalid-key payloads all return 403.
- Single-use replay: token body cached in Redis SET NX EX 1800s.
- sha256 round-trip: copyAndVerify() re-fetches from the target after put()
  and aborts the migration on any mismatch.
- Free-disk pre-flight: when migrating to filesystem, sums byte counts via
  source.head() and aborts if free space < total * 1.2.
- pg_advisory_lock(0xc7000a01) prevents concurrent migrations.
- Resumable: per-row progress markers in _storage_migration_progress.

system_settings keys read by the factory (jsonb, no schema change):
storage_backend, storage_s3_endpoint, storage_s3_region, storage_s3_bucket,
storage_s3_access_key, storage_s3_secret_key_encrypted,
storage_s3_force_path_style, storage_filesystem_root,
storage_proxy_hmac_secret_encrypted. Defaults: storage_backend=`s3`,
storage_filesystem_root=`./storage` (./storage added to .gitignore).

Tests added (34 tests, all green):
- tests/unit/storage/filesystem-backend.test.ts — key validation allow/reject
  matrix, realpath escape, 0o700 perms, multi-node refusal, HMAC token
  sign/verify/tamper/expire/invalid-key.
- tests/unit/storage/copy-and-verify.test.ts — sha256 mismatch on round-trip
  aborts the migration.
- tests/integration/storage/proxy-route.test.ts — happy path, wrong HMAC
  secret, expired token, replay rejection.

Phase 6a ships zero file-bearing tables — TABLES_WITH_STORAGE_KEYS is
intentionally empty. berth_pdf_versions and brochure_versions land in Phase 6b
and join the list there. Existing s3_key columns: only
gdpr_export_jobs.storage_key, already named correctly — no rename needed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 03:15:59 +02:00
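
A minimal sketch of the proxy-token scheme the commit message describes,
assuming hypothetical helper names (`signProxyToken` / `verifyProxyToken`) and
a base64url token encoding; the shipped implementation lives in
`src/lib/storage/filesystem.ts` and may encode the payload differently:

    import { createHmac, timingSafeEqual } from 'node:crypto';

    interface ProxyTokenPayload {
      key: string;
      expiresAt: number; // unix seconds
      nonce: string;
      filename: string;
      contentType: string;
    }

    function signProxyToken(payload: ProxyTokenPayload, secret: string): string {
      const body = Buffer.from(JSON.stringify(payload)).toString('base64url');
      const sig = createHmac('sha256', secret).update(body).digest('base64url');
      return `${body}.${sig}`;
    }

    function verifyProxyToken(token: string, secret: string): ProxyTokenPayload | null {
      const [body, sig] = token.split('.');
      if (!body || !sig) return null;
      const expected = createHmac('sha256', secret).update(body).digest();
      const given = Buffer.from(sig, 'base64url');
      // timingSafeEqual throws on length mismatch, so compare lengths first.
      if (given.length !== expected.length || !timingSafeEqual(given, expected)) return null;
      const payload = JSON.parse(Buffer.from(body, 'base64url').toString('utf8')) as ProxyTokenPayload;
      if (payload.expiresAt * 1000 < Date.now()) return null;
      return payload;
    }
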
/**
 * Pluggable storage backend (Phase 6a; see docs/berth-recommender-and-pdf-plan.md §4.7a).
 *
 * The CRM stores files (per-berth PDFs, brochures, GDPR exports, etc.) through a
 * single `StorageBackend` abstraction. The deployment chooses between an
 * S3-compatible store (MinIO / AWS S3 / Backblaze B2 / Cloudflare R2 / Wasabi /
 * Tigris) and a local filesystem at runtime via `system_settings.storage_backend`.
 *
 * Callers should always import from this barrel, never from `s3.ts` or
 * `filesystem.ts` directly, so the factory wiring stays the single source of
 * truth.
 */
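
/*
 * Example (illustrative, not executed): the barrel is the only import surface
 * callers should touch.
 *
 *   import { getStorageBackend } from '@/lib/storage';
 */
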
import { and, eq, isNull } from 'drizzle-orm';
import { db } from '@/lib/db';
import { systemSettings } from '@/lib/db/schema/system';
import { logger } from '@/lib/logger';
import { FilesystemBackend } from './filesystem';
import { S3Backend } from './s3';

export type StorageBackendName = 's3' | 'filesystem';

export interface PutOpts {
  contentType: string;
  /** Optional pre-computed sha256 hex — if absent, the backend computes one. */
  sha256?: string;
  /** Bytes (for streams that don't expose .length); used for capacity pre-flight. */
  sizeBytes?: number;
  /** Optional content-disposition for downloads (filesystem proxy only). */
  contentDisposition?: string;
}

export interface PresignOpts {
  /** TTL in seconds. Default 900 (15 min) per SECURITY-GUIDELINES §7.1. */
  expirySeconds?: number;
  contentType?: string;
  /** Filename used in Content-Disposition for downloads. */
  filename?: string;
}

export interface StorageBackend {
  /** Upload bytes. Returns the canonical key, size, and sha256 hex. */
  put(
    key: string,
    body: Buffer | NodeJS.ReadableStream,
    opts: PutOpts,
  ): Promise<{ key: string; sizeBytes: number; sha256: string }>;
  /** Stream a file out. Throws NotFoundError if missing. */
  get(key: string): Promise<NodeJS.ReadableStream>;
  /** Existence + size check without reading the full body. Returns null when missing. */
  head(key: string): Promise<{ sizeBytes: number; contentType: string } | null>;
  /** Delete. Idempotent — missing keys must not throw. */
  delete(key: string): Promise<void>;
  /** Generate a short-lived URL the browser can PUT to. */
  presignUpload(key: string, opts: PresignOpts): Promise<{ url: string; method: 'PUT' | 'POST' }>;
  /** Generate a short-lived URL the browser can GET from. */
  presignDownload(key: string, opts: PresignOpts): Promise<{ url: string; expiresAt: Date }>;
  readonly name: StorageBackendName;
}
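
/*
 * Example (illustrative only; the key and buffer are hypothetical): uploading
 * through whichever backend is active, then stat-ing the result.
 *
 *   const backend = await getStorageBackend();
 *   const { key, sha256 } = await backend.put(
 *     'berth-pdfs/port-1/berth-42.pdf',
 *     pdfBuffer,
 *     { contentType: 'application/pdf' },
 *   );
 *   const meta = await backend.head(key); // { sizeBytes, contentType } | null
 */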

// ─── factory ────────────────────────────────────────────────────────────────

interface CachedFactory {
  backend: StorageBackend;
  /** Resolved at cache time so we can re-fetch when settings change. */
  configFingerprint: string;
}

let cached: CachedFactory | null = null;

/**
 * Reset the per-process backend cache. Called after `system_settings` writes
 * via the existing pub/sub invalidation hook, and exposed for tests.
 */
export function resetStorageBackendCache(): void {
  cached = null;
}
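
/*
 * Example (illustrative test usage, assuming a vitest suite as elsewhere in
 * this repo):
 *
 *   beforeEach(() => resetStorageBackendCache());
 */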

interface StorageConfigSnapshot {
  backend: StorageBackendName;
  s3?: {
    endpoint?: string;
    region?: string;
    bucket?: string;
    accessKey?: string;
    secretKeyEncrypted?: string;
    forcePathStyle?: boolean;
  };
  filesystem?: {
    root?: string;
    proxyHmacSecretEncrypted?: string;
  };
}

async function readGlobalSetting<T = unknown>(key: string): Promise<T | null> {
  const [row] = await db
    .select()
    .from(systemSettings)
    .where(and(eq(systemSettings.key, key), isNull(systemSettings.portId)));
  return (row?.value as T | undefined) ?? null;
}
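
/*
 * Roughly the SQL the query above compiles to (illustrative; assumes the
 * table is named system_settings with a port_id column):
 *
 *   SELECT * FROM system_settings WHERE key = $1 AND port_id IS NULL;
 */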

async function loadStorageConfig(): Promise<StorageConfigSnapshot> {
  // Each setting key is a separate row. We read them in parallel.
  const keys = [
    'storage_backend',
    'storage_s3_endpoint',
    'storage_s3_region',
    'storage_s3_bucket',
    'storage_s3_access_key',
    'storage_s3_secret_key_encrypted',
    'storage_s3_force_path_style',
    'storage_filesystem_root',
    'storage_proxy_hmac_secret_encrypted',
  ] as const;
  const [
    backendRaw,
    s3Endpoint,
    s3Region,
    s3Bucket,
    s3AccessKey,
    s3SecretKeyEncrypted,
    s3ForcePathStyle,
    fsRoot,
    fsHmacSecretEncrypted,
  ] = await Promise.all(keys.map((k) => readGlobalSetting<unknown>(k)));
  const backend: StorageBackendName = backendRaw === 'filesystem' ? 'filesystem' : 's3';
  return {
    backend,
    s3: {
      endpoint: typeof s3Endpoint === 'string' ? s3Endpoint : undefined,
      region: typeof s3Region === 'string' ? s3Region : undefined,
      bucket: typeof s3Bucket === 'string' ? s3Bucket : undefined,
      accessKey: typeof s3AccessKey === 'string' ? s3AccessKey : undefined,
      secretKeyEncrypted:
        typeof s3SecretKeyEncrypted === 'string' ? s3SecretKeyEncrypted : undefined,
      // Accept string-shaped jsonb booleans ('true'/'false'); a bare Boolean()
      // cast would turn the string 'false' into true.
      forcePathStyle:
        typeof s3ForcePathStyle === 'boolean' ? s3ForcePathStyle : s3ForcePathStyle === 'true',
    },
    filesystem: {
      root: typeof fsRoot === 'string' ? fsRoot : undefined,
      proxyHmacSecretEncrypted:
        typeof fsHmacSecretEncrypted === 'string' ? fsHmacSecretEncrypted : undefined,
    },
  };
}
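
/*
 * Illustrative global rows this factory reads (values are jsonb; port_id IS
 * NULL marks a global setting; example values are hypothetical):
 *
 *   key                     | value
 *   ------------------------+---------------
 *   storage_backend         | "filesystem"
 *   storage_filesystem_root | "./storage"
 *   storage_s3_bucket       | "crm-files"
 */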

/**
 * The fingerprint includes encrypted-secret material because rotating the
 * secret should invalidate the cached client. After a key rotation the
 * settings-write hook calls `resetStorageBackendCache()` explicitly, so
 * this comparison is a defense-in-depth backstop rather than the primary
 * invalidation path. If you ever change `loadStorageConfig` to read
 * additional sensitive material, make sure the rotation flow keeps
 * resetting the cache explicitly; relying on the fingerprint diff alone
 * means the old client is held in memory until the next mismatch.
 */
function fingerprint(cfg: StorageConfigSnapshot): string {
  return JSON.stringify(cfg);
}

/**
 * Resolve the active backend. Caches per-process; the cache is invalidated by
 * `resetStorageBackendCache()` (called when `system_settings.storage_backend`
 * changes via the migration flow).
 */
export async function getStorageBackend(): Promise<StorageBackend> {
  const cfg = await loadStorageConfig();
  const fp = fingerprint(cfg);
  if (cached && cached.configFingerprint === fp) {
    return cached.backend;
  }
  const backend = await buildBackend(cfg);
  cached = { backend, configFingerprint: fp };
  logger.info({ backend: backend.name }, 'Storage backend resolved');
  return backend;
}

async function buildBackend(cfg: StorageConfigSnapshot): Promise<StorageBackend> {
  if (cfg.backend === 'filesystem') {
    return FilesystemBackend.create({
      root: cfg.filesystem?.root ?? './storage',
      proxyHmacSecretEncrypted: cfg.filesystem?.proxyHmacSecretEncrypted ?? null,
    });
  }
  return S3Backend.create({
    endpoint: cfg.s3?.endpoint,
    region: cfg.s3?.region,
    bucket: cfg.s3?.bucket,
    accessKey: cfg.s3?.accessKey,
    secretKeyEncrypted: cfg.s3?.secretKeyEncrypted,
    forcePathStyle: cfg.s3?.forcePathStyle,
  });
}

// ─── url helpers ────────────────────────────────────────────────────────────

/**
 * Convenience wrapper that returns just the presigned download URL, the most
 * common need at call sites that don't track expiry. Mirrors the legacy
 * `getPresignedUrl(key)` helper in `@/lib/minio` but routes through the
 * active backend so filesystem-mode deployments work too.
 */
export async function presignDownloadUrl(
  key: string,
  expirySeconds = 900,
  filename?: string,
): Promise<string> {
  const backend = await getStorageBackend();
  const { url } = await backend.presignDownload(key, { expirySeconds, filename });
  return url;
}
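
/*
 * Example (illustrative; the key is hypothetical): a 15-minute download link
 * for a GDPR export.
 *
 *   const url = await presignDownloadUrl('gdpr/export-0a1b.zip', 900, 'export.zip');
 */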

// ─── re-exports ─────────────────────────────────────────────────────────────

export { S3Backend } from './s3';
export { FilesystemBackend, validateStorageKey } from './filesystem';