Initial commit: Port Nimara CRM (Layers 0-4)
Full CRM rebuild with Next.js 15, TypeScript, Tailwind, Drizzle ORM,
PostgreSQL, Redis, BullMQ, MinIO, and Socket.io. Includes 461 source
files covering clients, berths, interests/pipeline, documents/EOI,
expenses/invoices, email, notifications, dashboard, admin, and
client portal. CI/CD via Gitea Actions with Docker builds.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 11:52:51 +01:00

import { Worker, type Job } from 'bullmq';
import { and, eq, lt } from 'drizzle-orm';
import type { ConnectionOptions } from 'bullmq';
import { db } from '@/lib/db';
import { formSubmissions } from '@/lib/db/schema/documents';
import { logger } from '@/lib/logger';
import { QUEUE_CONFIGS } from '@/lib/queue';

export const maintenanceWorker = new Worker(
  'maintenance',
  async (job: Job) => {
    logger.info({ jobId: job.id, jobName: job.name }, 'Processing maintenance job');

    // Dispatch by job name; each case lazy-imports its service so the
    // worker process stays light at startup.
    switch (job.name) {
      case 'currency-refresh': {
        const { refreshRates } = await import('@/lib/services/currency');
        await refreshRates();
        break;
      }

      case 'form-expiry-check': {
        // Flip pending submissions past their deadline to 'expired'.
        const result = await db
          .update(formSubmissions)
          .set({ status: 'expired' })
          .where(
            and(eq(formSubmissions.status, 'pending'), lt(formSubmissions.expiresAt, new Date())),
          )
          .returning({ id: formSubmissions.id });
        logger.info({ expired: result.length }, 'Form expiry check complete');
        break;
      }

feat(alerts): rule engine, recurring evaluator, socket fanout
PR2 of Phase B. Wires the alert framework end-to-end:
- alert-rules.ts: 10 rule evaluators implemented as pure async fns over
  the existing schema. reservation.no_agreement, interest.stale,
  document.signer_overdue, berth.under_offer_stalled, expense.duplicate,
  expense.unscanned, interest.high_value_silent, and eoi.unsigned_long
  fire against real conditions.
  document.expiring_soon stays inert until the documents schema gets an
  expires_at column. audit.suspicious_login also stays inert until the
  auth layer logs 'login.failed' rows (TODO noted in the rule body).
- alert-engine.ts: runAlertEngine() walks every port × every rule and
calls reconcileAlertsForPort. Errors per (port, rule) are collected
in the summary, not thrown — one bad evaluator can't stop the sweep.
- alerts.service.ts: reconcileAlertsForPort now emits 'alert:created'
socket events on insert and 'alert:resolved' on auto-resolve;
dismissAlert emits 'alert:dismissed'. All scoped to port:{portId}
rooms.
- socket/events.ts: adds the three Server→Client alert event types.
- queue/scheduler.ts: registers 'alerts-evaluate' on the maintenance
queue with cron */5 * * * * (every 5 min, per spec risk register).
- queue/workers/maintenance.ts: dispatches 'alerts-evaluate' to
runAlertEngine; logs sweep summary.
Tests:
- tests/integration/alerts-engine.test.ts (6 cases): seeds reservation
→ fires, runs twice → no dupe, adds agreement → auto-resolves; seeds
stale interest → fires; hot lead silent → critical; engine summary
shape on no-data port. Socket emit module is vi.mocked.
Vitest 681/681 (was 675; +6). tsc clean. Lint clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 14:50:55 +02:00
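The "pure async fns over the existing schema" shape described above can be sketched as follows. This is an illustrative stand-in, not the repo's actual API: the row/candidate types, the `interestStaleRule` name, and the 14-day threshold are all assumptions for the sketch; the real evaluators query drizzle rather than take rows as arguments.

```typescript
// Hypothetical shape of one rule evaluator: a pure async function that
// maps rows to alert candidates, so it can be tested without a database.
type InterestRow = { id: string; portId: string; pipelineStage: string; updatedAt: Date };
type AlertCandidate = { ruleId: string; entityId: string; portId: string };

const STALE_DAYS = 14; // assumed threshold, not taken from the spec

export async function interestStaleRule(
  rows: InterestRow[],
  now: Date = new Date(),
): Promise<AlertCandidate[]> {
  const cutoff = new Date(now.getTime() - STALE_DAYS * 24 * 60 * 60 * 1000);
  return rows
    .filter((r) => r.pipelineStage !== 'closed' && r.updatedAt < cutoff)
    .map((r) => ({ ruleId: 'interest.stale', entityId: r.id, portId: r.portId }));
}
```

Because each evaluator is pure, reconcileAlertsForPort can diff its output against existing alert rows to decide what to insert (fires) and what to auto-resolve, which is what makes the "runs twice → no dupe" test above cheap to write.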
      case 'alerts-evaluate': {
        const { runAlertEngine } = await import('@/lib/services/alert-engine');
        const summary = await runAlertEngine();
        logger.info(summary, 'Alert engine sweep complete');
        break;
      }

feat(analytics): real computations + 15-min snapshot refresh job
PR3 of Phase B. Replaces the no-op stubs in analytics.service.ts with
working drizzle queries and adds the recurring BullMQ job that warms
the cache.
Computations:
- computePipelineFunnel: groups interests by pipeline_stage filtered by
port + range + not archived; emits 8-row stages array with conversion
pct relative to 'open' as the funnel top.
- computeOccupancyTimeline: per day in range, counts berths covered by
an active reservation (start_date ≤ day, end_date IS NULL OR ≥ day);
emits {date, occupied, total, occupancyPct}.
- computeRevenueBreakdown: sums invoices.total grouped by status +
currency; filters out archived rows.
- computeLeadSourceAttribution: counts interests by source descending;
null source bucketed as 'unspecified'.
Public API (getPipelineFunnel, getOccupancyTimeline, etc.) reads
analytics_snapshots first; falls back to compute + writeSnapshot. TTL
15 minutes (matches the cron interval).
Cron:
- queue/scheduler.ts registers 'analytics-refresh' on maintenance with
pattern '*/15 * * * *'.
- queue/workers/maintenance.ts dispatches to refreshSnapshotsForPort
for every port; per-port try/catch so one bad port doesn't kill the
sweep.
Tests: tests/integration/analytics-service.test.ts (9 cases). Pipeline
funnel math (incl. zero state), occupancy timeline shape/percentages
with seeded reservations, revenue grouped by status + currency, lead
source attribution incl. null bucketing, cache hit (mutate snapshot
directly → next read returns mutated value), refreshSnapshotsForPort
warms every metric×range combo.
Vitest 690/690 (+9). tsc + lint clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 14:54:46 +02:00
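The snapshot-first read path described above (serve the cached snapshot while it is younger than the 15-minute TTL, otherwise recompute and rewrite it) can be sketched with an in-memory map standing in for the analytics_snapshots table. The names `Snapshot` and `readWithSnapshot` are illustrative, not the service's real identifiers:

```typescript
// Minimal sketch of a snapshot-first read with a 15-minute TTL. The Map
// stands in for the analytics_snapshots table; `now` is injectable so the
// TTL logic is testable without a clock.
const TTL_MS = 15 * 60 * 1000; // matches the */15 cron interval

type Snapshot<T> = { value: T; computedAt: number };
const snapshots = new Map<string, Snapshot<unknown>>();

export async function readWithSnapshot<T>(
  key: string,
  compute: () => Promise<T>,
  now: number = Date.now(),
): Promise<T> {
  const hit = snapshots.get(key) as Snapshot<T> | undefined;
  if (hit && now - hit.computedAt < TTL_MS) return hit.value; // fresh: serve cached
  const value = await compute(); // stale or missing: recompute and rewrite
  snapshots.set(key, { value, computedAt: now });
  return value;
}
```

With the TTL equal to the cron interval, the recurring job keeps snapshots warm, so interactive reads almost always hit the cached branch; the compute fallback only covers the window between a snapshot expiring and the next sweep.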
      case 'analytics-refresh': {
        const { ports } = await import('@/lib/db/schema/ports');
        const { refreshSnapshotsForPort } = await import('@/lib/services/analytics.service');
        const allPorts = await db.select({ id: ports.id }).from(ports);
        for (const p of allPorts) {
          try {
            await refreshSnapshotsForPort(p.id);
          } catch (err) {
            // One failing port must not abort the sweep; log and continue.
            logger.warn({ portId: p.id, err }, 'Analytics refresh failed for port');
          }
        }
        logger.info({ count: allPorts.length }, 'Analytics snapshot refresh complete');
        break;
      }

      default:
        logger.warn({ jobName: job.name }, 'Unknown maintenance job');
    }
  },
  {
    connection: { url: process.env.REDIS_URL! } as ConnectionOptions,
    concurrency: QUEUE_CONFIGS.maintenance.concurrency,
  },
);

maintenanceWorker.on('failed', (job, err) => {
  logger.error({ jobId: job?.id, jobName: job?.name, err }, 'Maintenance job failed');
});