pn-new-crm/src/app/api/public/berths/[mooringNumber]/route.ts
feat(berths): public berths API + health env-match endpoint

Adds the read-only public-website data feed promised by plan §4.5 and §7.3. The marketing site's `getBerths()` swap is now a one-line URL change against the existing 5-min TTL behaviour.

- src/app/api/public/berths/route.ts: GET (unauth) returns the full port-nimara berth list as { list, pageInfo } in the verbatim NocoDB shape ("Mooring Number", "Side Pontoon", quoted-key fields). Cache: s-maxage=300 + stale-while-revalidate=60. portSlug query param lets future ports opt in.
- src/app/api/public/berths/[mooringNumber]/route.ts: GET single. Up-front regex validation (^[A-Z]+\d+$) rejects malformed lookups with 400 + cache-control: no-store before hitting the DB. 404 + no-store when not found.
- src/app/api/public/health/route.ts: returns { status, env, appUrl, timestamp } so the marketing site can refuse to start when its CRM_PUBLIC_URL points at a different deployment env (§14.8 critical env-mismatch protection).
- src/lib/services/public-berths.ts: pure mapper with derivePublicStatus ("sold" wins; otherwise specific-interest junction OR status='under_offer' -> "Under Offer"; else "Available").
- 11 unit tests covering numeric coercion, status derivation, archived-berth handling, missing-map-data omission, and the status-precedence rule that "sold" trumps the specific-interest signal.

Smoke-tested: /api/public/berths -> 117 rows, A1 correctly shows "Under Offer" (has interest_berths.is_specific_interest=true link), INVALID -> 400, Z99 -> 404. Total tests: 996 -> 1007.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 02:52:44 +02:00
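The status-precedence rule the commit describes ("sold" wins; otherwise a specific-interest junction row OR status='under_offer' means "Under Offer"; else "Available") can be sketched as below. This is a hypothetical reconstruction, not the real implementation in src/lib/services/public-berths.ts; the function name matches the commit message, but the parameter names are assumptions:

```typescript
type PublicStatus = 'Sold' | 'Under Offer' | 'Available';

// Sketch of the precedence described in the commit message above.
// berthStatus is the raw CRM status; hasSpecificInterest reflects an
// interest_berths.is_specific_interest=true junction row (assumed shape).
function derivePublicStatus(berthStatus: string, hasSpecificInterest: boolean): PublicStatus {
  if (berthStatus === 'sold') return 'Sold'; // "sold" trumps the specific-interest signal
  if (hasSpecificInterest || berthStatus === 'under_offer') return 'Under Offer';
  return 'Available';
}
```

Checking precedence first matters: a sold berth with a lingering specific-interest link must report "Sold", which is exactly the rule one of the 11 unit tests pins down.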
import { NextResponse } from 'next/server';
import { and, eq, isNull } from 'drizzle-orm';
import { db } from '@/lib/db';
import { ports } from '@/lib/db/schema/ports';
import { berths, berthMapData } from '@/lib/db/schema/berths';
import { interestBerths, interests } from '@/lib/db/schema/interests';
import { logger } from '@/lib/logger';
import { toPublicBerth } from '@/lib/services/public-berths';

/**
 * GET /api/public/berths/[mooringNumber]
 *
 * Single-berth lookup for the public website's `/berths/[number]`
 * page. Mooring numbers are matched against the canonical bare form
 * ("A1", "B12") - Phase 0 normalized the entire CRM dataset.
 */
// Hard-coded allowlist for the public read-only feed. Adding a port here
// is a deliberate decision (not silent enumeration via ?portSlug=), so a
// future private tenant can't be exposed by accident.
const PUBLIC_PORT_SLUGS = new Set(['port-nimara']);
const DEFAULT_PUBLIC_PORT_SLUG = 'port-nimara';
const RESPONSE_HEADERS = {
  'cache-control': 'public, s-maxage=300, stale-while-revalidate=60',
  'content-type': 'application/json; charset=utf-8',
};
const MOORING_PATTERN = /^[A-Z]+\d+$/;
export async function GET(
  request: Request,
  ctx: { params: Promise<{ mooringNumber: string }> },
): Promise<Response> {
  const { mooringNumber } = await ctx.params;
  const url = new URL(request.url);
  const requestedSlug = url.searchParams.get('portSlug') ?? DEFAULT_PUBLIC_PORT_SLUG;
  if (!PUBLIC_PORT_SLUGS.has(requestedSlug)) {
    return NextResponse.json(
      { error: 'port is not part of the public berths feed', portSlug: requestedSlug },
      { status: 404, headers: { 'cache-control': 'no-store' } },
    );
  }
  const portSlug = requestedSlug;
  // Reject obviously malformed mooring numbers up front so cache poisoning /
  // random-URL probing returns 400 rather than 404 (saves a DB hit).
  if (!MOORING_PATTERN.test(mooringNumber)) {
    return NextResponse.json(
      { error: 'invalid mooring number', mooringNumber },
      { status: 400, headers: { 'cache-control': 'no-store' } },
    );
  }
  const [port] = await db
    .select({ id: ports.id })
    .from(ports)
    .where(eq(ports.slug, portSlug))
    .limit(1);
  if (!port) {
    return NextResponse.json(
      { error: 'port not found', portSlug },
      { status: 404, headers: { 'cache-control': 'no-store' } },
    );
  }
  const [berth] = await db
    .select()
    .from(berths)
    .where(and(eq(berths.portId, port.id), eq(berths.mooringNumber, mooringNumber)))
    .limit(1);
  if (!berth) {
    return NextResponse.json(
      { error: 'berth not found', mooringNumber },
      { status: 404, headers: { 'cache-control': 'no-store' } },
    );
  }
  const [mapData, specificInterestRows] = await Promise.all([
    db.select().from(berthMapData).where(eq(berthMapData.berthId, berth.id)).limit(1),
    db
      .select({ berthId: interestBerths.berthId })
      .from(interestBerths)
      .innerJoin(interests, eq(interests.id, interestBerths.interestId))
      .where(
        and(
          eq(interestBerths.berthId, berth.id),
          eq(interestBerths.isSpecificInterest, true),
          isNull(interests.archivedAt),
          // Closed deals (won/lost/cancelled) don't promote to "Under
          // Offer" - won flows through berths.status='sold' handled in
          // derivePublicStatus; lost/cancelled means back on the market.
          isNull(interests.outcome),
        ),
      )
      .limit(1),
  ]);
  const out = toPublicBerth(berth, mapData[0] ?? null, specificInterestRows.length > 0);
  if (out.Status !== 'Available' && out.Status !== 'Under Offer' && out.Status !== 'Sold') {
    logger.error({ berthId: berth.id, status: out.Status }, 'Public berth status out of range');
    return NextResponse.json({ error: 'internal' }, { status: 500 });
  }
  return new Response(JSON.stringify(out), { headers: RESPONSE_HEADERS, status: 200 });
}
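
A marketing-site consumer of this route might look like the following sketch. Only the URL shape, the mooring-number regex, and the 400/404 status codes come from this file; the helper names (`getBerth`, `isValidMooringNumber`) and the `baseUrl` parameter are illustrative assumptions:

```typescript
// Same canonical bare form the route enforces server-side ("A1", "B12").
const MOORING_PATTERN = /^[A-Z]+\d+$/;

function isValidMooringNumber(value: string): boolean {
  return MOORING_PATTERN.test(value);
}

// Hypothetical consumer-side lookup: pre-validate to avoid a guaranteed 400,
// map 404 (unknown mooring number or non-allowlisted port) to null,
// and surface anything else as an error.
async function getBerth(baseUrl: string, mooringNumber: string): Promise<unknown> {
  if (!isValidMooringNumber(mooringNumber)) return null;
  const res = await fetch(`${baseUrl}/api/public/berths/${encodeURIComponent(mooringNumber)}`);
  if (res.status === 404) return null;
  if (!res.ok) throw new Error(`berth lookup failed: ${res.status}`);
  return res.json(); // NocoDB-shaped berth record
}
```

Pre-validating client-side mirrors the route's own up-front regex gate, so the only 400s a correct consumer can see are transport-level surprises.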