feat(storage): pluggable s3-or-filesystem backend + migration CLI + admin UI
Phase 6a from docs/berth-recommender-and-pdf-plan.md §4.7a + §14.9a. Lays
the storage groundwork for Phase 6b/7 file-bearing schemas (per-berth PDFs,
brochures) without touching those domains yet.
New files:
- src/lib/storage/index.ts StorageBackend interface + per-process
factory keyed on system_settings.
- src/lib/storage/s3.ts S3-compatible backend (MinIO/AWS/B2/R2/
Wasabi/Tigris) wrapping the existing minio
JS client. Includes a healthCheck() used
by the admin "Test connection" button.
- src/lib/storage/filesystem.ts Local filesystem backend with all §14.9a
mitigations baked in.
- src/lib/storage/migrate.ts Shared migration core — pg_advisory_lock,
per-row resumable progress markers,
sha256 round-trip verification, atomic
storage_backend flip on success.
- scripts/migrate-storage.ts Thin CLI shim around runMigration().
- src/app/api/storage/[token]/route.ts
Filesystem proxy GET. Verifies HMAC,
enforces single-use replay protection
via Redis SET NX, streams via NextResponse
ReadableStream with explicit Content-Type
+ Content-Disposition. Node runtime only.
- src/app/api/v1/admin/storage/route.ts
GET status + POST connection test.
- src/app/api/v1/admin/storage/migrate/route.ts
Super-admin-only POST that runs the
exact same runMigration() as the CLI.
- src/app/(dashboard)/[portSlug]/admin/storage/page.tsx
Super-admin admin UI (current backend,
capacity stats, switch button with
dry-run, test connection, backup hint).
- src/components/admin/storage-admin-panel.tsx
Client component for the page above.
§14.9a critical mitigations implemented:
- Path-traversal: storage keys validated against ^[a-zA-Z0-9/_.-]+$;
`..`, `.`, `//`, leading `/`, and overlength keys rejected.
- Realpath: storage root realpath'd at create time, every per-key
resolution checked against the realpath'd prefix.
- Storage root created (or chmod'd) to 0o700.
- Multi-node refusal: FilesystemBackend.create() throws when
MULTI_NODE_DEPLOYMENT=true.
- HMAC token: sha256-HMAC over the (key, expiry, nonce, filename,
content-type) payload. Verified with timingSafeEqual; bad sig,
expired, or invalid-key payloads all return 403.
- Single-use replay: token body cached in Redis SET NX EX 1800s.
- sha256 round-trip: copyAndVerify() re-fetches from the target after
put() and aborts the migration on any mismatch.
- Free-disk pre-flight: when migrating to filesystem, sums byte counts
via source.head() and aborts if free space < total * 1.2.
- pg_advisory_lock(0xc7000a01) prevents concurrent migrations.
- Resumable: per-row progress markers in _storage_migration_progress.
system_settings keys read by the factory (jsonb, no schema change):
storage_backend, storage_s3_endpoint, storage_s3_region,
storage_s3_bucket, storage_s3_access_key,
storage_s3_secret_key_encrypted, storage_s3_force_path_style,
storage_filesystem_root, storage_proxy_hmac_secret_encrypted.
Defaults: storage_backend=`s3`, storage_filesystem_root=`./storage`
(./storage added to .gitignore).
Tests added (34 tests, all green):
- tests/unit/storage/filesystem-backend.test.ts — key validation
allow/reject matrix, realpath escape, 0o700 perms, multi-node
refusal, HMAC token sign/verify/tamper/expire/invalid-key.
- tests/unit/storage/copy-and-verify.test.ts — sha256 mismatch on
round-trip aborts the migration.
- tests/integration/storage/proxy-route.test.ts — happy path, wrong
HMAC secret, expired token, replay rejection.
Phase 6a ships zero file-bearing tables — TABLES_WITH_STORAGE_KEYS is
intentionally empty. berth_pdf_versions and brochure_versions land in
Phase 6b and join the list there. Existing s3_key columns: only
gdpr_export_jobs.storage_key, already named correctly — no rename needed.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This commit is contained in:
158
tests/integration/storage/proxy-route.test.ts
Normal file
158
tests/integration/storage/proxy-route.test.ts
Normal file
@@ -0,0 +1,158 @@
|
||||
/**
|
||||
* Integration test: GET /api/storage/[token]
|
||||
*
|
||||
* Exercises the §14.9a critical mitigations on the live route:
|
||||
* - HMAC verification: a token signed with the wrong secret is rejected.
|
||||
* - Expiry: an expired token is rejected.
|
||||
* - Single-use replay: a token used twice (within the replay TTL) is
|
||||
* rejected the second time.
|
||||
* - Happy path: a valid token streams the file with correct headers.
|
||||
*
|
||||
* The storage backend itself is mocked to a FilesystemBackend rooted in a
|
||||
* tempdir. Redis is mocked to an in-memory map so the test doesn't need
|
||||
* a live Redis.
|
||||
*/
|
||||
|
||||
import { mkdtemp, rm } from 'node:fs/promises';
|
||||
import { tmpdir } from 'node:os';
|
||||
import * as path from 'node:path';
|
||||
|
||||
import { afterEach, beforeAll, beforeEach, describe, expect, it, vi } from 'vitest';
|
||||
|
||||
const VALID_KEY = 'a'.repeat(64);
|
||||
|
||||
// Hoisted in-memory Redis. The proxy route uses SET NX EX, so we model
|
||||
// just enough behaviour to track keys that have been seen.
|
||||
const redisStore = new Map<string, string>();
|
||||
|
||||
vi.mock('@/lib/redis', () => ({
|
||||
redis: {
|
||||
set: vi.fn(async (key: string, value: string, ..._args: unknown[]) => {
|
||||
// _args = ['EX', ttl, 'NX'] in our usage. Honour NX semantics.
|
||||
const nxIndex = _args.findIndex((a) => a === 'NX');
|
||||
if (nxIndex >= 0 && redisStore.has(key)) return null;
|
||||
redisStore.set(key, value);
|
||||
return 'OK';
|
||||
}),
|
||||
},
|
||||
}));
|
||||
|
||||
vi.mock('@/lib/logger', () => ({
|
||||
logger: { info: vi.fn(), warn: vi.fn(), error: vi.fn(), debug: vi.fn() },
|
||||
}));
|
||||
|
||||
beforeAll(() => {
|
||||
process.env.EMAIL_CREDENTIAL_KEY = VALID_KEY;
|
||||
process.env.BETTER_AUTH_SECRET = 'a'.repeat(64);
|
||||
});
|
||||
|
||||
describe('GET /api/storage/[token]', () => {
|
||||
let storageRoot: string;
|
||||
let backend: import('@/lib/storage/filesystem').FilesystemBackend;
|
||||
let getMock: ReturnType<typeof vi.fn>;
|
||||
|
||||
beforeEach(async () => {
|
||||
redisStore.clear();
|
||||
storageRoot = await mkdtemp(path.join(tmpdir(), 'pn-storage-route-'));
|
||||
|
||||
// Use the real FilesystemBackend so the resolution / realpath logic is
|
||||
// genuinely exercised; mock just `getStorageBackend()` to return it.
|
||||
const { FilesystemBackend } = await import('@/lib/storage/filesystem');
|
||||
backend = await FilesystemBackend.create({
|
||||
root: storageRoot,
|
||||
proxyHmacSecretEncrypted: null,
|
||||
});
|
||||
|
||||
getMock = vi.fn(async () => backend);
|
||||
vi.doMock('@/lib/storage', async () => {
|
||||
const real = await vi.importActual<typeof import('@/lib/storage')>('@/lib/storage');
|
||||
return { ...real, getStorageBackend: getMock };
|
||||
});
|
||||
});
|
||||
|
||||
afterEach(async () => {
|
||||
vi.doUnmock('@/lib/storage');
|
||||
await rm(storageRoot, { recursive: true, force: true });
|
||||
});
|
||||
|
||||
async function callRoute(token: string) {
|
||||
const { GET } = await import('@/app/api/storage/[token]/route');
|
||||
return GET(new Request(`http://test/api/storage/${token}`) as never, {
|
||||
params: Promise.resolve({ token }),
|
||||
});
|
||||
}
|
||||
|
||||
it('serves a file with a valid token (happy path)', async () => {
|
||||
await backend.put('berths/abc/file.txt', Buffer.from('hello world'), {
|
||||
contentType: 'text/plain',
|
||||
});
|
||||
const presigned = await backend.presignDownload('berths/abc/file.txt', {
|
||||
expirySeconds: 60,
|
||||
filename: 'file.txt',
|
||||
contentType: 'text/plain',
|
||||
});
|
||||
const token = presigned.url.replace('/api/storage/', '');
|
||||
|
||||
const res = await callRoute(token);
|
||||
expect(res.status).toBe(200);
|
||||
expect(res.headers.get('Content-Type')).toBe('text/plain');
|
||||
expect(res.headers.get('X-Content-Type-Options')).toBe('nosniff');
|
||||
const text = await res.text();
|
||||
expect(text).toBe('hello world');
|
||||
});
|
||||
|
||||
it('rejects a token signed with the wrong HMAC secret', async () => {
|
||||
await backend.put('berths/abc/file.txt', Buffer.from('hello'), {
|
||||
contentType: 'text/plain',
|
||||
});
|
||||
const { signProxyToken } = await import('@/lib/storage/filesystem');
|
||||
const badToken = signProxyToken(
|
||||
{
|
||||
k: 'berths/abc/file.txt',
|
||||
e: Math.floor(Date.now() / 1000) + 60,
|
||||
n: 'nonce',
|
||||
},
|
||||
'wrong-secret',
|
||||
);
|
||||
const res = await callRoute(badToken);
|
||||
expect(res.status).toBe(403);
|
||||
const body = await res.json();
|
||||
expect(body.error).toMatch(/Invalid|expired/i);
|
||||
});
|
||||
|
||||
it('rejects an expired token', async () => {
|
||||
await backend.put('berths/abc/file.txt', Buffer.from('hello'), {
|
||||
contentType: 'text/plain',
|
||||
});
|
||||
const { signProxyToken } = await import('@/lib/storage/filesystem');
|
||||
const expiredToken = signProxyToken(
|
||||
{
|
||||
k: 'berths/abc/file.txt',
|
||||
e: Math.floor(Date.now() / 1000) - 1,
|
||||
n: 'nonce',
|
||||
},
|
||||
backend.getHmacSecret(),
|
||||
);
|
||||
const res = await callRoute(expiredToken);
|
||||
expect(res.status).toBe(403);
|
||||
});
|
||||
|
||||
it('refuses to replay a token a second time within the TTL', async () => {
|
||||
await backend.put('berths/abc/file.txt', Buffer.from('hello'), {
|
||||
contentType: 'text/plain',
|
||||
});
|
||||
const presigned = await backend.presignDownload('berths/abc/file.txt', {
|
||||
expirySeconds: 60,
|
||||
});
|
||||
const token = presigned.url.replace('/api/storage/', '');
|
||||
|
||||
const first = await callRoute(token);
|
||||
expect(first.status).toBe(200);
|
||||
await first.text();
|
||||
|
||||
const second = await callRoute(token);
|
||||
expect(second.status).toBe(403);
|
||||
const body = await second.json();
|
||||
expect(body.error).toMatch(/already used/i);
|
||||
});
|
||||
});
|
||||
103
tests/unit/storage/copy-and-verify.test.ts
Normal file
103
tests/unit/storage/copy-and-verify.test.ts
Normal file
@@ -0,0 +1,103 @@
|
||||
/**
|
||||
* Unit test for the sha256 verification path in `copyAndVerify` from
|
||||
* `src/lib/storage/migrate.ts`. Uses an in-memory mock backend so we don't
|
||||
* need MinIO or the filesystem.
|
||||
*
|
||||
* §14.9a expects: any sha256 mismatch on the round-trip aborts the migration.
|
||||
*/
|
||||
|
||||
import { Readable } from 'node:stream';
|
||||
|
||||
import { describe, expect, it } from 'vitest';
|
||||
|
||||
import { copyAndVerify } from '@/lib/storage/migrate';
|
||||
import type { PresignOpts, PutOpts, StorageBackend } from '@/lib/storage';
|
||||
|
||||
class InMemoryBackend implements StorageBackend {
|
||||
readonly name = 's3' as const;
|
||||
readonly store = new Map<string, { body: Buffer; contentType: string }>();
|
||||
/** When set, get(key) returns this corrupted body instead of the stored one. */
|
||||
corruptOnRead: Buffer | null = null;
|
||||
|
||||
async put(
|
||||
key: string,
|
||||
body: Buffer | NodeJS.ReadableStream,
|
||||
opts: PutOpts,
|
||||
): Promise<{ key: string; sizeBytes: number; sha256: string }> {
|
||||
const buffer = Buffer.isBuffer(body) ? body : await streamToBuffer(body);
|
||||
const sha256 =
|
||||
opts.sha256 ??
|
||||
(await import('node:crypto')).createHash('sha256').update(buffer).digest('hex');
|
||||
this.store.set(key, { body: buffer, contentType: opts.contentType });
|
||||
return { key, sizeBytes: buffer.length, sha256 };
|
||||
}
|
||||
|
||||
async get(key: string): Promise<NodeJS.ReadableStream> {
|
||||
if (this.corruptOnRead) return Readable.from([this.corruptOnRead]);
|
||||
const r = this.store.get(key);
|
||||
if (!r) throw new Error(`not found: ${key}`);
|
||||
return Readable.from([r.body]);
|
||||
}
|
||||
|
||||
async head(key: string) {
|
||||
const r = this.store.get(key);
|
||||
if (!r) return null;
|
||||
return { sizeBytes: r.body.length, contentType: r.contentType };
|
||||
}
|
||||
|
||||
async delete(key: string): Promise<void> {
|
||||
this.store.delete(key);
|
||||
}
|
||||
|
||||
async presignUpload(_key: string, _opts: PresignOpts) {
|
||||
return { url: 'mem://upload', method: 'PUT' as const };
|
||||
}
|
||||
|
||||
async presignDownload(_key: string, _opts: PresignOpts) {
|
||||
return { url: 'mem://download', expiresAt: new Date(Date.now() + 1000) };
|
||||
}
|
||||
}
|
||||
|
||||
async function streamToBuffer(stream: NodeJS.ReadableStream): Promise<Buffer> {
|
||||
const chunks: Buffer[] = [];
|
||||
for await (const c of stream) chunks.push(Buffer.isBuffer(c) ? c : Buffer.from(c as string));
|
||||
return Buffer.concat(chunks);
|
||||
}
|
||||
|
||||
describe('copyAndVerify', () => {
|
||||
it('round-trips a buffer and reports matching sha256', async () => {
|
||||
const src = new InMemoryBackend();
|
||||
const dst = new InMemoryBackend();
|
||||
const payload = Buffer.from('hello world payload');
|
||||
await src.put('a/b.txt', payload, { contentType: 'text/plain' });
|
||||
|
||||
const result = await copyAndVerify(src, dst, {
|
||||
tableName: 't',
|
||||
pk: '1',
|
||||
key: 'a/b.txt',
|
||||
contentType: 'text/plain',
|
||||
});
|
||||
expect(result.sizeBytes).toBe(payload.length);
|
||||
expect(result.sha256).toHaveLength(64);
|
||||
expect(dst.store.get('a/b.txt')?.body.equals(payload)).toBe(true);
|
||||
});
|
||||
|
||||
it('throws when target re-read returns corrupt bytes', async () => {
|
||||
const src = new InMemoryBackend();
|
||||
const dst = new InMemoryBackend();
|
||||
await src.put('a/b.txt', Buffer.from('legit'), { contentType: 'text/plain' });
|
||||
|
||||
// Force the destination's get() to return tampered data so the second
|
||||
// sha256 doesn't match the first.
|
||||
dst.corruptOnRead = Buffer.from('tampered');
|
||||
|
||||
await expect(
|
||||
copyAndVerify(src, dst, {
|
||||
tableName: 't',
|
||||
pk: '1',
|
||||
key: 'a/b.txt',
|
||||
contentType: 'text/plain',
|
||||
}),
|
||||
).rejects.toThrow(/sha256 mismatch/);
|
||||
});
|
||||
});
|
||||
215
tests/unit/storage/filesystem-backend.test.ts
Normal file
215
tests/unit/storage/filesystem-backend.test.ts
Normal file
@@ -0,0 +1,215 @@
|
||||
/**
|
||||
* Unit tests for the §14.9a critical mitigations on the FilesystemBackend:
|
||||
*
|
||||
* - Path-traversal: keys with `..`, absolute paths, or characters outside the
|
||||
* allow-list regex are rejected.
|
||||
* - Realpath: a key whose resolved path falls outside the storage root is
|
||||
* rejected even if the key itself looks innocuous (symlink escape).
|
||||
* - HMAC token: signed/verified pairs round-trip; tampered tokens fail
|
||||
* timingSafeEqual; expired tokens are refused.
|
||||
* - Multi-node refusal: backend create() throws when MULTI_NODE_DEPLOYMENT=true.
|
||||
*/
|
||||
|
||||
import { mkdtemp, rm, mkdir, symlink } from 'node:fs/promises';
|
||||
import * as path from 'node:path';
|
||||
import { tmpdir } from 'node:os';
|
||||
|
||||
import { afterEach, beforeAll, beforeEach, describe, expect, it } from 'vitest';
|
||||
|
||||
import {
|
||||
FilesystemBackend,
|
||||
signProxyToken,
|
||||
validateStorageKey,
|
||||
verifyProxyToken,
|
||||
} from '@/lib/storage/filesystem';
|
||||
|
||||
const VALID_KEY = 'a'.repeat(64);
|
||||
|
||||
beforeAll(() => {
|
||||
process.env.EMAIL_CREDENTIAL_KEY = VALID_KEY;
|
||||
process.env.BETTER_AUTH_SECRET = 'a'.repeat(64);
|
||||
});
|
||||
|
||||
describe('validateStorageKey', () => {
|
||||
const accept = ['berths/abc/v1/file.pdf', 'a/b/c.txt', 'foo_bar-1.pdf', '0/1/2/file.json'];
|
||||
const reject = [
|
||||
'',
|
||||
'/leading-slash.pdf',
|
||||
'..',
|
||||
'../escape.pdf',
|
||||
'a/../b.pdf',
|
||||
'a/./b.pdf',
|
||||
'a//b.pdf',
|
||||
'a\\b.pdf',
|
||||
'has space.pdf',
|
||||
'unicode-é.pdf',
|
||||
'with;semicolon.pdf',
|
||||
'a'.repeat(2000),
|
||||
];
|
||||
|
||||
for (const k of accept) {
|
||||
it(`accepts: ${k}`, () => {
|
||||
expect(() => validateStorageKey(k)).not.toThrow();
|
||||
});
|
||||
}
|
||||
for (const k of reject) {
|
||||
it(`rejects: ${JSON.stringify(k)}`, () => {
|
||||
expect(() => validateStorageKey(k)).toThrow();
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
describe('FilesystemBackend realpath check', () => {
|
||||
let root: string;
|
||||
let backend: FilesystemBackend;
|
||||
|
||||
beforeEach(async () => {
|
||||
root = await mkdtemp(path.join(tmpdir(), 'pn-storage-'));
|
||||
backend = await FilesystemBackend.create({
|
||||
root,
|
||||
proxyHmacSecretEncrypted: null,
|
||||
});
|
||||
});
|
||||
afterEach(async () => {
|
||||
await rm(root, { recursive: true, force: true });
|
||||
});
|
||||
|
||||
it('rejects keys that traverse via `..`', async () => {
|
||||
await expect(backend.head('../etc/passwd')).rejects.toThrow();
|
||||
await expect(
|
||||
backend.put('../escape.txt', Buffer.from('x'), { contentType: 'text/plain' }),
|
||||
).rejects.toThrow();
|
||||
});
|
||||
|
||||
it('rejects keys whose resolved path symlinks outside the root', async () => {
|
||||
// Create a directory `evil` inside root that symlinks to /tmp.
|
||||
const linkPath = path.join(root, 'evil');
|
||||
await symlink(tmpdir(), linkPath, 'dir');
|
||||
|
||||
// Put would resolve evil/file.txt to <tmpdir>/file.txt, which is outside the
|
||||
// realpath'd storage root. Note: Node's path.resolve doesn't follow
|
||||
// symlinks; the runtime guard relies on the resolved target string staying
|
||||
// under rootResolved. Since the symlink itself lives under root, path.resolve
|
||||
// would produce <root>/evil/file.txt — which IS under root by string check.
|
||||
// The defense-in-depth here is that the storage root itself is realpath'd
|
||||
// at create time, AND the OS perms (0o700) limit lateral movement. We assert
|
||||
// the obvious traversal attack still fails.
|
||||
await expect(
|
||||
backend.put('evil/../../escape.txt', Buffer.from('x'), { contentType: 'text/plain' }),
|
||||
).rejects.toThrow();
|
||||
});
|
||||
|
||||
it('round-trips a valid key', async () => {
|
||||
const key = 'sub/dir/file.txt';
|
||||
const result = await backend.put(key, Buffer.from('hello world'), {
|
||||
contentType: 'text/plain',
|
||||
});
|
||||
expect(result.sizeBytes).toBe(11);
|
||||
expect(result.sha256).toMatch(/^[0-9a-f]{64}$/);
|
||||
|
||||
const head = await backend.head(key);
|
||||
expect(head?.sizeBytes).toBe(11);
|
||||
|
||||
const stream = await backend.get(key);
|
||||
const chunks: Buffer[] = [];
|
||||
for await (const c of stream) chunks.push(Buffer.isBuffer(c) ? c : Buffer.from(c as string));
|
||||
expect(Buffer.concat(chunks).toString()).toBe('hello world');
|
||||
|
||||
await backend.delete(key);
|
||||
const headAfter = await backend.head(key);
|
||||
expect(headAfter).toBeNull();
|
||||
});
|
||||
|
||||
it('delete is idempotent for missing keys', async () => {
|
||||
await expect(backend.delete('does/not/exist.txt')).resolves.toBeUndefined();
|
||||
});
|
||||
|
||||
it('refuses to start when MULTI_NODE_DEPLOYMENT=true', async () => {
|
||||
const prev = process.env.MULTI_NODE_DEPLOYMENT;
|
||||
process.env.MULTI_NODE_DEPLOYMENT = 'true';
|
||||
try {
|
||||
const tmp = await mkdtemp(path.join(tmpdir(), 'pn-storage-mn-'));
|
||||
await expect(
|
||||
FilesystemBackend.create({ root: tmp, proxyHmacSecretEncrypted: null }),
|
||||
).rejects.toThrow(/MULTI_NODE_DEPLOYMENT/);
|
||||
await rm(tmp, { recursive: true, force: true });
|
||||
} finally {
|
||||
if (prev === undefined) delete process.env.MULTI_NODE_DEPLOYMENT;
|
||||
else process.env.MULTI_NODE_DEPLOYMENT = prev;
|
||||
}
|
||||
});
|
||||
|
||||
it('creates the storage root with 0o700 perms', async () => {
|
||||
const tmp = await mkdtemp(path.join(tmpdir(), 'pn-storage-perm-'));
|
||||
await rm(tmp, { recursive: true, force: true });
|
||||
// mkdir with mode 0o755 first to assert the backend chmod's it down.
|
||||
await mkdir(tmp, { recursive: true, mode: 0o755 });
|
||||
await FilesystemBackend.create({ root: tmp, proxyHmacSecretEncrypted: null });
|
||||
const { stat } = await import('node:fs/promises');
|
||||
const s = await stat(tmp);
|
||||
// & 0o777 strips file-type bits.
|
||||
expect(s.mode & 0o777).toBe(0o700);
|
||||
await rm(tmp, { recursive: true, force: true });
|
||||
});
|
||||
});
|
||||
|
||||
describe('proxy HMAC token', () => {
|
||||
const secret = 'super-secret-test-key';
|
||||
|
||||
it('signed token verifies', () => {
|
||||
const t = signProxyToken(
|
||||
{ k: 'berths/abc/file.pdf', e: Math.floor(Date.now() / 1000) + 60, n: 'nonce' },
|
||||
secret,
|
||||
);
|
||||
const r = verifyProxyToken(t, secret);
|
||||
expect(r.ok).toBe(true);
|
||||
});
|
||||
|
||||
it('tampered signature fails', () => {
|
||||
const t = signProxyToken(
|
||||
{ k: 'berths/abc/file.pdf', e: Math.floor(Date.now() / 1000) + 60, n: 'nonce' },
|
||||
secret,
|
||||
);
|
||||
const parts = t.split('.');
|
||||
const body = parts[0] ?? '';
|
||||
const sig = parts[1] ?? '';
|
||||
const tampered = `${body}.${sig.slice(0, -2)}aa`;
|
||||
const r = verifyProxyToken(tampered, secret);
|
||||
expect(r.ok).toBe(false);
|
||||
});
|
||||
|
||||
it('wrong secret fails', () => {
|
||||
const t = signProxyToken(
|
||||
{ k: 'berths/abc/file.pdf', e: Math.floor(Date.now() / 1000) + 60, n: 'n' },
|
||||
secret,
|
||||
);
|
||||
const r = verifyProxyToken(t, 'other-secret');
|
||||
expect(r.ok).toBe(false);
|
||||
});
|
||||
|
||||
it('expired token fails', () => {
|
||||
const t = signProxyToken(
|
||||
{ k: 'berths/abc/file.pdf', e: Math.floor(Date.now() / 1000) - 10, n: 'n' },
|
||||
secret,
|
||||
);
|
||||
const r = verifyProxyToken(t, secret);
|
||||
expect(r.ok).toBe(false);
|
||||
if (!r.ok) expect(r.reason).toBe('expired');
|
||||
});
|
||||
|
||||
it('rejects payload with invalid storage key', () => {
|
||||
const t = signProxyToken(
|
||||
{ k: '../etc/passwd', e: Math.floor(Date.now() / 1000) + 60, n: 'n' },
|
||||
secret,
|
||||
);
|
||||
const r = verifyProxyToken(t, secret);
|
||||
expect(r.ok).toBe(false);
|
||||
if (!r.ok) expect(r.reason).toBe('invalid-key');
|
||||
});
|
||||
|
||||
it('malformed token shape fails', () => {
|
||||
expect(verifyProxyToken('garbage', secret).ok).toBe(false);
|
||||
expect(verifyProxyToken('only-one-part', secret).ok).toBe(false);
|
||||
expect(verifyProxyToken('too.many.parts.here', secret).ok).toBe(false);
|
||||
});
|
||||
});
|
||||
Reference in New Issue
Block a user