fix(audit-v3): platform-wide deferred-list cleanup (rounds 1-4)
Working through the audit-v2 deferred backlog. Each round was tested
(typecheck + 1168/1168 vitest) before moving on.
Round 1 — DB performance + AI cost visibility:
- Add missing FK indexes Postgres doesn't auto-create on
berth_reservations.{interest_id, contract_file_id},
documents.{file_id, signed_file_id}, document_events.signer_id,
document_templates.source_file_id, form_submissions.{form_template_id,
client_id}, document_sends.{brochure_id, brochure_version_id,
sent_by_user_id}. Without these, RESTRICT-checks on parent delete +
reverse-lookups walk the child tables fully. Migration 0037.
- AI worker now writes one ai_usage_ledger row per OpenAI call so admins
can audit spend per port/user/feature and future per-port budgets have
history to read from. Failure to write is logged-not-thrown so the
user-facing email draft is unaffected.
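
As a sketch of the rollup this enables (the aiUsageLedger table and its
columns are real, see the diff below; the query itself is an illustrative
Drizzle-style aggregation, not code from this commit):

  import { eq, sql } from 'drizzle-orm';
  import { db } from '@/lib/db';
  import { aiUsageLedger } from '@/lib/db/schema/ai-usage';

  // Total tokens per feature for one port, read straight off the ledger.
  async function spendByFeature(portId: string) {
    return db
      .select({
        feature: aiUsageLedger.feature,
        totalTokens: sql<number>`sum(${aiUsageLedger.totalTokens})`,
      })
      .from(aiUsageLedger)
      .where(eq(aiUsageLedger.portId, portId))
      .groupBy(aiUsageLedger.feature);
  }
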
Round 2 — Boot-time + transport hardening:
- S3 backend verifies the bucket exists at startup (or auto-creates
when MINIO_AUTO_CREATE_BUCKET=true). A typo'd bucket name now
surfaces with a clear boot error instead of a vague Minio error
inside the first user-facing request.
- Documenso v1 placeFields: 3-attempt exponential-backoff retry on 5xx
+ network errors, fail-fast on 4xx. Stops one transient flake from
leaving a document with a partial field set.
- FilesystemBackend logs a structured warn-once at boot when the dev
HMAC fallback is in effect, so two processes started with different
BETTER_AUTH_SECRET values are observable (random 401s on file
downloads otherwise).
- Logger redact paths extended to cover *.headers.{authorization,
cookie}, *.config.headers.authorization, encrypted-credential blobs
(secretKeyEncrypted, smtpPassEncrypted, etc.), the Documenso
X-Documenso-Secret header, and 2-level nested forms.
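
Context for the nesting: pino's redact wildcards (fast-redact) match
exactly one key level each, so a one-level path never covers a deeper
object. Minimal sketch, not code from this commit:

  import pino from 'pino';

  // '*.headers.authorization' covers { err: { headers: { authorization } } }
  // but not { err: { config: { headers: { authorization } } } }; the deeper
  // '*.config.headers.authorization' path handles that shape.
  const log = pino({
    redact: {
      paths: ['*.headers.authorization', '*.config.headers.authorization'],
      censor: '[REDACTED]',
    },
  });

  log.error(
    { err: { config: { headers: { authorization: 'Bearer s3cr3t' } } } },
    'http client error', // authorization is logged as '[REDACTED]'
  );
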
Round 3 — UI feedback + permission gates:
- Storage admin migrate dialog: success toast with row count + error
toast on both dryRun and migrate mutations.
- Invoice detail Send + Record-payment buttons wrapped in
PermissionGate (invoices.send / invoices.record_payment); both
mutations now toast on success/error (see the sketch after this list).
- Admin user list Edit button wrapped in PermissionGate(admin.manage_users).
- Scan-receipt page surfaces an amber warning when OCR fails so reps
know they can fill the form manually instead of staring at a stalled
spinner; the editable form now also opens on scanMutation.isError
/ uploadedFile, not only on success.
- Email threads list now renders skeleton rows during load + shared
EmptyState for the empty case (was a single "Loading…" line).
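
A rough sketch of the gate + toast pattern used above (PermissionGate
and the permission keys are from this codebase; Button, toast, the hook,
and the react-query mutate callbacks are illustrative assumptions):

  function InvoiceActions({ invoiceId }: { invoiceId: string }) {
    const sendMutation = useSendInvoiceMutation(); // assumed hook name
    return (
      <PermissionGate permission="invoices.send">
        <Button
          onClick={() =>
            sendMutation.mutate(
              { invoiceId },
              {
                onSuccess: () => toast.success('Invoice sent'),
                onError: (err: Error) => toast.error(`Send failed: ${err.message}`),
              },
            )
          }
        >
          Send
        </Button>
      </PermissionGate>
    );
  }
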
Round 4 — Service / route correctness:
- documentSends.sent_by_user_id was a free-text NOT NULL column with no
FK. Now nullable + FK to user(id) ON DELETE SET NULL so the audit row
survives a user being hard-deleted. Migration 0038 with a defensive
null-out for any orphan ids before attaching the constraint.
- Saved-views route: documented why withAuth alone is correct (the
service strictly filters by (portId, userId) — owner-only by design).
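
i.e. the service is shaped roughly like this (import paths, table and
column names are assumptions mirroring the description above):

  import { and, eq } from 'drizzle-orm';
  import { db } from '@/lib/db';
  import { savedViews } from '@/lib/db/schema/saved-views';

  // Owner-only by construction: every read is scoped to the caller's
  // (portId, userId) pair, so withAuth needs no extra permission gate.
  export async function listSavedViews(portId: string, userId: string) {
    return db
      .select()
      .from(savedViews)
      .where(and(eq(savedViews.portId, portId), eq(savedViews.userId, userId)));
  }
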
- Public-interests audit log: replaced "userId: null as unknown as
string" cast with userId: null; AuditLogParams already accepts null
for system-generated events.
- EOI in-app PDF fill: extracted setBerthRange() that, when the
AcroForm field is missing AND the context has a non-empty range
string, logs a structured warn so the deployment gap (live Documenso
template needs the field) is observable instead of silently dropping
the multi-berth range.
Test status: 1168/1168 vitest. tsc clean. Two new migrations
(0037/0038) need pnpm db:push (or migration apply) on the dev DB.
Deferred-doc updated with the remaining open items (bigger refactors).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
src/lib/db/migrations/0037_missing_fk_indexes.sql (new file, 38 lines)

@@ -0,0 +1,38 @@
+-- Audit-final v2 follow-up: cover the FK columns Postgres doesn't auto-index.
+-- Without these, deleting a parent row (or RESTRICT-checking on update) walks
+-- the child table fully. CREATE INDEX IF NOT EXISTS keeps the migration safe
+-- to re-run.
+
+-- berth_reservations
+CREATE INDEX IF NOT EXISTS idx_br_interest
+  ON berth_reservations(interest_id);
+CREATE INDEX IF NOT EXISTS idx_br_contract_file
+  ON berth_reservations(contract_file_id);
+
+-- documents (file FKs)
+CREATE INDEX IF NOT EXISTS idx_docs_file_id
+  ON documents(file_id);
+CREATE INDEX IF NOT EXISTS idx_docs_signed_file_id
+  ON documents(signed_file_id);
+
+-- document_events
+CREATE INDEX IF NOT EXISTS idx_de_signer
+  ON document_events(signer_id);
+
+-- document_templates
+CREATE INDEX IF NOT EXISTS idx_dt_source_file
+  ON document_templates(source_file_id);
+
+-- form_submissions
+CREATE INDEX IF NOT EXISTS idx_fs_template
+  ON form_submissions(form_template_id);
+CREATE INDEX IF NOT EXISTS idx_fs_client
+  ON form_submissions(client_id);
+
+-- document_sends
+CREATE INDEX IF NOT EXISTS idx_ds_brochure
+  ON document_sends(brochure_id);
+CREATE INDEX IF NOT EXISTS idx_ds_brochure_version
+  ON document_sends(brochure_version_id);
+CREATE INDEX IF NOT EXISTS idx_ds_sent_by
+  ON document_sends(sent_by_user_id);

src/lib/db/migrations/0038_document_sends_sent_by_user_fk.sql (new file, 23 lines)

@@ -0,0 +1,23 @@
+-- Audit-final v2 follow-up: document_sends.sent_by_user_id was a free-text
+-- column with no FK. If a user is hard-deleted (rare; we soft-delete), an
+-- orphan id remained without any ON DELETE handling. Add the FK with
+-- SET NULL semantics so the audit row keeps recipient + timestamp + body
+-- even when the originating user is removed.
+
+-- Drop the NOT NULL so SET NULL is legal.
+ALTER TABLE document_sends
+  ALTER COLUMN sent_by_user_id DROP NOT NULL;
+
+-- Defensive: if any historical rows have sent_by_user_id values that don't
+-- match an existing user (dev-only), null them out so the FK can attach.
+UPDATE document_sends
+SET sent_by_user_id = NULL
+WHERE sent_by_user_id IS NOT NULL
+  AND sent_by_user_id NOT IN (SELECT id FROM "user");
+
+ALTER TABLE document_sends
+  ADD CONSTRAINT document_sends_sent_by_user_id_user_id_fk
+  FOREIGN KEY (sent_by_user_id) REFERENCES "user"(id) ON DELETE SET NULL;
+
+CREATE INDEX IF NOT EXISTS idx_ds_sent_by
+  ON document_sends(sent_by_user_id);

src/lib/db/migrations/meta/_journal.json

@@ -260,6 +260,20 @@
       "when": 1778100000000,
       "tag": "0036_polymorphic_check_constraints",
       "breakpoints": true
-    }
+    },
+    {
+      "idx": 37,
+      "version": "7",
+      "when": 1778150000000,
+      "tag": "0037_missing_fk_indexes",
+      "breakpoints": true
+    },
+    {
+      "idx": 38,
+      "version": "7",
+      "when": 1778200000000,
+      "tag": "0038_document_sends_sent_by_user_fk",
+      "breakpoints": true
+    }
   ]
 }

@@ -13,6 +13,7 @@ import { ports } from './ports';
 import { clients } from './clients';
 import { interests } from './interests';
 import { berths } from './berths';
+import { user } from './users';

 /**
  * Port-wide brochures (Phase 7 — see plan §3.3 / §4.8).

@@ -123,7 +124,12 @@ export const documentSends = pgTable(
     }),
     /** Exact body used (after merge-field expansion + sanitization). */
     bodyMarkdown: text('body_markdown'),
-    sentByUserId: text('sent_by_user_id').notNull(),
+    /**
+     * better-auth user id of the sender. SET NULL on user delete so the
+     * audit row keeps `recipientEmail` + timestamp + body for compliance
+     * even when the originating user is removed from the system.
+     */
+    sentByUserId: text('sent_by_user_id').references(() => user.id, { onDelete: 'set null' }),
     fromAddress: text('from_address').notNull(),
     sentAt: timestamp('sent_at', { withTimezone: true }).notNull().defaultNow(),
     /** SMTP provider message-id for deliverability tracking. */

@@ -143,6 +149,11 @@ export const documentSends = pgTable(
     index('idx_ds_interest').on(t.interestId, t.sentAt),
     index('idx_ds_berth').on(t.berthId, t.sentAt),
     index('idx_ds_port').on(t.portId, t.sentAt),
+    // Reverse-lookups: "what sends used this brochure / version" and
+    // FK-RESTRICT scans on brochure delete.
+    index('idx_ds_brochure').on(t.brochureId),
+    index('idx_ds_brochure_version').on(t.brochureVersionId),
+    index('idx_ds_sent_by').on(t.sentByUserId),
   ],
 );

@@ -80,6 +80,11 @@ export const documents = pgTable(
     index('idx_docs_reservation').on(table.reservationId),
     index('idx_docs_type').on(table.portId, table.documentType),
     index('idx_docs_status_port').on(table.portId, table.status),
+    // Cover the file FKs Postgres doesn't auto-index. Without these,
+    // deleting (or RESTRICT-checking) a referenced files row scans
+    // the documents table fully.
+    index('idx_docs_file_id').on(table.fileId),
+    index('idx_docs_signed_file_id').on(table.signedFileId),
   ],
 );

@@ -122,6 +127,8 @@ export const documentEvents = pgTable(
   },
   (table) => [
     index('idx_de_doc').on(table.documentId),
+    // Reverse-lookup signer→events without scanning the events table.
+    index('idx_de_signer').on(table.signerId),
     uniqueIndex('idx_de_dedup')
       .on(table.documentId, table.signatureHash)
       .where(sql`${table.signatureHash} IS NOT NULL`),

@@ -161,6 +168,7 @@ export const documentTemplates = pgTable(
   (table) => [
     index('idx_dt_port').on(table.portId),
     index('idx_dt_type').on(table.portId, table.templateType),
+    index('idx_dt_source_file').on(table.sourceFileId),
   ],
 );

@@ -221,7 +229,11 @@ export const formSubmissions = pgTable(
     submittedAt: timestamp('submitted_at', { withTimezone: true }),
     createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
   },
-  (table) => [uniqueIndex('idx_fs_token').on(table.token)],
+  (table) => [
+    uniqueIndex('idx_fs_token').on(table.token),
+    index('idx_fs_template').on(table.formTemplateId),
+    index('idx_fs_client').on(table.clientId),
+  ],
 );

 export type File = typeof files.$inferSelect;

@@ -41,6 +41,11 @@ export const berthReservations = pgTable(
     index('idx_br_client').on(table.clientId),
     index('idx_br_yacht').on(table.yachtId),
     index('idx_br_port').on(table.portId),
+    // Cover the FKs Postgres doesn't auto-index. Without these, deleting
+    // (or restrict-checking) the parent interest / contract file row
+    // requires a full scan of berth_reservations.
+    index('idx_br_interest').on(table.interestId),
+    index('idx_br_contract_file').on(table.contractFileId),
     uniqueIndex('idx_br_active')
       .on(table.berthId)
       .where(sql`${table.status} = 'active'`),

@@ -15,6 +15,31 @@ export const logger = pino({
      '*.secret',
      '*.accessKey',
      '*.secretKey',
+      // Encrypted credential blobs surface in storage / smtp config logs
+      // unintentionally; redact them defensively even though they're
+      // already AES-encrypted at rest.
+      '*.secretKeyEncrypted',
+      '*.smtpPassEncrypted',
+      '*.imapPassEncrypted',
+      '*.proxyHmacSecretEncrypted',
+      // HTTP authorization headers (Bearer tokens, Basic creds) leak via
+      // err.config.headers on http-client error logs.
+      '*.headers.authorization',
+      '*.headers.Authorization',
+      '*.headers["x-documenso-secret"]',
+      '*.config.headers.Authorization',
+      '*.config.headers.authorization',
+      // Cookie headers can carry session tokens.
+      '*.headers.cookie',
+      '*.headers.Cookie',
+      // Two-level nesting for things like `req.headers.authorization` or
+      // `cfg.s3.secretKeyEncrypted`.
+      '*.*.password',
+      '*.*.token',
+      '*.*.secret',
+      '*.*.secretKeyEncrypted',
+      '*.*.headers.authorization',
+      '*.*.headers.Authorization',
    ],
    censor: '[REDACTED]',
  },

@@ -4,6 +4,7 @@ import path from 'node:path';
 import { PDFDocument } from 'pdf-lib';

 import type { EoiContext } from '@/lib/services/eoi-context';
+import { logger } from '@/lib/logger';

 /**
  * Source PDF for the in-app EOI pathway. Must contain AcroForm fields whose

@@ -48,6 +49,28 @@ function setText(form: ReturnType<PDFDocument['getForm']>, name: string, value:
   }
 }

+/**
+ * Special-cased setter for the multi-berth `Berth Range` field. When the
+ * caller has a non-empty range and the AcroForm field is missing, we log
+ * a warning so the deployment gap is observable (the in-app pathway is
+ * intentionally tolerant of older PDF templates, but ops needs to know
+ * when ranges are silently dropped — otherwise a customer's multi-berth
+ * EOI ships with only the primary mooring visible).
+ */
+function setBerthRange(form: ReturnType<PDFDocument['getForm']>, value: string): void {
+  try {
+    form.getTextField('Berth Range').setText(value);
+  } catch {
+    if (value && value.trim().length > 0) {
+      logger.warn(
+        { berthRange: value },
+        'EOI in-app PDF template is missing the "Berth Range" AcroForm field — ' +
+          'multi-berth bundle range string was dropped. Update the source template.',
+      );
+    }
+  }
+}
+
 function setCheckbox(
   form: ReturnType<PDFDocument['getForm']>,
   name: string,

@@ -88,9 +111,12 @@ export async function fillEoiFormFields(
   setText(form, 'Draft', context.yacht?.draftFt ?? '');
   setText(form, 'Berth Number', context.berth?.mooringNumber ?? '');
-  // Multi-berth EOI: compact range string from the interest's EOI bundle.
-  // Falls back silently when the AcroForm field doesn't exist (older
-  // template revisions without the field still fill cleanly).
-  setText(form, 'Berth Range', context.eoiBerthRange);
+  // The AcroForm field may be absent on an older template revision —
+  // when the context HAS a non-empty range string but the field is
+  // missing we surface a structured warning so the deployment gap is
+  // observable (the CRM dataset has multi-berth bundles but the live
+  // PDF template needs the field added before they render correctly).
+  setBerthRange(form, context.eoiBerthRange);

   setCheckbox(form, 'Purchase', true);
   setCheckbox(form, 'Lease_10', false);

@@ -9,6 +9,40 @@ import { QUEUE_CONFIGS } from '@/lib/queue';
 const MAX_OUTPUT_BYTES = 10 * 1024; // 10 KB
 const OPENAI_TIMEOUT_MS = 30_000; // 30 s

+interface RecordAiUsageArgs {
+  portId: string;
+  userId: string;
+  feature: string;
+  provider: 'openai' | 'claude' | 'tesseract';
+  model: string;
+  inputTokens: number;
+  outputTokens: number;
+  totalTokens: number;
+  requestId: string | null;
+}
+
+/**
+ * Insert one ai_usage_ledger row per provider call. Best-effort — the
+ * draft generation is the user-facing artefact, the ledger is
+ * observability. Imports are lazy so this module loads cleanly inside
+ * the worker bundle without dragging the DB layer in at import time.
+ */
+async function recordAiUsage(args: RecordAiUsageArgs): Promise<void> {
+  const { db } = await import('@/lib/db');
+  const { aiUsageLedger } = await import('@/lib/db/schema/ai-usage');
+  await db.insert(aiUsageLedger).values({
+    portId: args.portId,
+    userId: args.userId,
+    feature: args.feature,
+    provider: args.provider,
+    model: args.model,
+    inputTokens: args.inputTokens,
+    outputTokens: args.outputTokens,
+    totalTokens: args.totalTokens,
+    requestId: args.requestId,
+  });
+}
+
 interface GenerateEmailDraftPayload {
   interestId: string;
   clientId: string;

@@ -150,7 +184,13 @@ async function generateEmailDraft(payload: GenerateEmailDraftPayload): Promise<D
   }

   const data = (await response.json()) as {
+    id?: string;
     choices: Array<{ message: { content: string } }>;
+    usage?: {
+      prompt_tokens?: number;
+      completion_tokens?: number;
+      total_tokens?: number;
+    };
   };

   const content = data.choices[0]?.message?.content ?? '{}';

@@ -160,6 +200,24 @@ async function generateEmailDraft(payload: GenerateEmailDraftPayload): Promise<D
     throw new Error('AI output exceeded 10 KB cap');
   }

+  // Record token usage so admins can audit spend + future per-port
+  // budget caps have a history to read from. Failure here must not
+  // bubble up — the email draft is the user-facing artefact, the
+  // ledger is observability.
+  void recordAiUsage({
+    portId,
+    userId: payload.requestedBy,
+    feature: 'reply_draft',
+    provider: 'openai',
+    model: 'gpt-4o-mini',
+    inputTokens: data.usage?.prompt_tokens ?? 0,
+    outputTokens: data.usage?.completion_tokens ?? 0,
+    totalTokens: data.usage?.total_tokens ?? 0,
+    requestId: data.id ?? null,
+  }).catch((err) => {
+    logger.warn({ err, interestId }, 'Failed to record AI usage ledger row');
+  });
+
   const parsed = JSON.parse(content) as { subject?: string; body?: string };
   subject = parsed.subject ?? `Follow-up: ${client.fullName}`;
   body = parsed.body ?? '';

@@ -382,18 +382,38 @@ export async function placeFields(
       pageWidth: Math.round((f.pageWidth / 100) * dims.width),
       pageHeight: Math.round((f.pageHeight / 100) * dims.height),
     };
-    const res = await fetch(`${baseUrl}/api/v1/documents/${docId}/fields`, {
-      method: 'POST',
-      headers: {
-        Authorization: `Bearer ${apiKey}`,
-        'Content-Type': 'application/json',
-      },
-      body: JSON.stringify(body),
-    });
-    if (!res.ok) {
-      const err = await res.text();
-      logger.error({ docId, status: res.status, err, portId }, 'Documenso v1 placeField error');
-      throw new Error(`Documenso v1 placeField error: ${res.status}`);
-    }
+    // Retry transient failures so one flaky 5xx mid-loop doesn't leave
+    // the document with a partial field set. 3 attempts at 250 / 500 /
+    // 1000 ms; 4xx responses (validation errors) fail-fast.
+    let lastError: { status: number; body: string } | null = null;
+    for (let attempt = 0; attempt < 3; attempt += 1) {
+      const res = await fetch(`${baseUrl}/api/v1/documents/${docId}/fields`, {
+        method: 'POST',
+        headers: {
+          Authorization: `Bearer ${apiKey}`,
+          'Content-Type': 'application/json',
+        },
+        body: JSON.stringify(body),
+      });
+      if (res.ok) {
+        lastError = null;
+        break;
+      }
+      const errBody = await res.text().catch(() => '');
+      lastError = { status: res.status, body: errBody };
+      // Don't retry on 4xx — that's a validation error, won't change.
+      if (res.status >= 400 && res.status < 500) break;
+      // Backoff: 250ms, 500ms (skipped on the 3rd iteration because we exit).
+      if (attempt < 2) {
+        await new Promise((r) => setTimeout(r, 250 * Math.pow(2, attempt)));
+      }
+    }
+    if (lastError) {
+      logger.error(
+        { docId, status: lastError.status, err: lastError.body, portId },
+        'Documenso v1 placeField error',
+      );
+      throw new Error(`Documenso v1 placeField error: ${lastError.status}`);
+    }
   }
 }

@@ -362,7 +362,21 @@ function resolveHmacSecret(encryptedSecret: string | null): string {
   // Dev fallback: derive a stable per-process secret so the filesystem
   // backend works without explicit configuration during local development.
   const seed = process.env.BETTER_AUTH_SECRET ?? env.BETTER_AUTH_SECRET ?? 'storage-default';
-  return createHash('sha256').update(`storage-proxy:${seed}`).digest('hex');
+  const derived = createHash('sha256').update(`storage-proxy:${seed}`).digest('hex');
+  // Warn once at boot so two processes started with different
+  // `BETTER_AUTH_SECRET` values are observable: tokens minted by one
+  // wouldn't validate on the other otherwise — which surfaces as random
+  // 401s on file downloads in dev.
+  logger.warn(
+    {
+      hint:
+        'Storage proxy HMAC derived from BETTER_AUTH_SECRET. ' +
+        'Multi-process dev setups must share the same secret value.',
+      secretFingerprint: derived.slice(0, 8),
+    },
+    'FilesystemBackend: using DEV HMAC fallback (no storage_proxy_hmac_secret_encrypted set)',
+  );
+  return derived;
 }

 async function streamToBuffer(stream: NodeJS.ReadableStream): Promise<Buffer> {

@@ -107,6 +107,37 @@ export class S3Backend implements StorageBackend {
       secretKey: resolved.secretKey,
       region: resolved.region,
     });
+    // Verify the bucket exists at boot so a typo / missing-bucket admin
+    // error surfaces with a clear message instead of as a vague Minio
+    // error inside the first user-facing request that touches storage.
+    // Logged-not-thrown when MINIO_AUTO_CREATE_BUCKET=true and the bucket
+    // is missing — we'll create it. Otherwise we throw so the boot fails
+    // fast and the deployment-time misconfig is loud.
+    try {
+      const exists = await client.bucketExists(resolved.bucket);
+      if (!exists) {
+        if (process.env.MINIO_AUTO_CREATE_BUCKET === 'true') {
+          await client.makeBucket(resolved.bucket, resolved.region);
+          logger.info(
+            { bucket: resolved.bucket, endpoint: resolved.endpoint },
+            'S3 bucket auto-created (MINIO_AUTO_CREATE_BUCKET=true)',
+          );
+        } else {
+          throw new Error(
+            `S3 bucket "${resolved.bucket}" does not exist on ${resolved.endpoint}. ` +
+              `Create it manually or set MINIO_AUTO_CREATE_BUCKET=true.`,
+          );
+        }
+      }
+    } catch (err) {
+      if (err instanceof Error && err.message.includes('does not exist')) throw err;
+      // Connection / auth errors get re-thrown with extra context.
+      logger.error(
+        { err, bucket: resolved.bucket, endpoint: resolved.endpoint },
+        'S3 bucket existence check failed at backend boot',
+      );
+      throw err;
+    }
     return new S3Backend(client, resolved.bucket);
   }