pn-new-crm/docker-compose.prod.yml
Matt Ciaccio 6a609ecf94 fix(audit-tier-1): timeouts, lifecycle, per-port Documenso, FK constraints
Closes the second wave of HIGH-priority audit findings:

* fetchWithTimeout helper (new src/lib/fetch-with-timeout.ts) wraps
  Documenso, OCR, currency, Umami, IMAP, etc. — a hung upstream can
  no longer pin a worker concurrency slot indefinitely.  OpenAI client
  passes timeout: 30_000.  ImapFlow gets socket / greeting / connection
  timeouts.  (Sketch 1 below.)
* SIGTERM / SIGINT handler in src/server.ts drains in-flight HTTP,
  closes Socket.io, and disconnects Redis before exit; compose
  stop_grace_period bumped to 30s.  Adds closeSocketServer() helper.
  (Sketch 2 below.)
* env.ts gains zod-validated PORT and MULTI_NODE_DEPLOYMENT, and
  filesystem.ts now reads from env (a typo can no longer silently
  disable the multi-node guard).  (Sketch 3 below.)
* Per-port Documenso template + recipient IDs land in system_settings
  with env fallback (PortDocumensoConfig now exposes eoiTemplateId,
  clientRecipientId, developerRecipientId, approvalRecipientId).
  document-templates.ts uses the per-port config and threads portId
  into documensoGenerateFromTemplate().  (Sketch 4 below.)
* Migration 0042 wires the eleven HIGH-tier missing FK constraints
  (documents/files/interests/reminders/berth_waiting_list/
  form_submissions) plus polymorphic CHECK round 2
  (yacht_ownership_history.owner_type, document_sends.document_kind),
  invoices.billing_entity_id NOT EMPTY, and clients.merged_into self-FK.
  Drizzle schema columns updated to .references(...) where possible
  so the misleading "FK wired in relations.ts" comments are gone.
  (Sketch 5 below.)
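
Sketch 1: a minimal shape for the fetchWithTimeout helper, assuming an
AbortController-based implementation. Only the helper name, the file path,
and the 30s OpenAI timeout come from this commit; the signature and the
default value here are guesses.

// Illustrative reconstruction of src/lib/fetch-with-timeout.ts, not the
// actual file. Aborts the request once the deadline passes so a hung
// upstream (Documenso, OCR, currency, Umami, ...) releases its worker
// concurrency slot instead of pinning it indefinitely.
export async function fetchWithTimeout(
  url: string,
  init: RequestInit = {},
  timeoutMs = 30_000, // default value is an assumption
): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timer); // always clear, even when fetch rejects
  }
}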
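
Sketch 2: the drain-then-exit shape the second bullet describes. The 25s
hard timeout is stated in the compose comment further down; the function
name installShutdownHandlers and the injected closer callbacks are
hypothetical stand-ins for whatever server.ts actually calls.

import type { Server } from "node:http";

type Closer = () => Promise<void>;

export function installShutdownHandlers(
  server: Server,
  closeSocketServer: Closer, // hypothetical signature for the new helper
  disconnectRedis: Closer,   // hypothetical Redis disconnect
): void {
  const shutdown = async (signal: NodeJS.Signals) => {
    // Safety net: force-exit before Docker's 30s stop_grace_period
    // expires and SIGKILLs the process mid-write.
    const hardStop = setTimeout(() => process.exit(1), 25_000);
    hardStop.unref();

    // Stop accepting new connections, then wait for in-flight requests.
    await new Promise<void>((resolve) => server.close(() => resolve()));
    await closeSocketServer(); // Socket.io
    await disconnectRedis();   // Redis
    process.exit(0);
  };
  process.once("SIGTERM", shutdown);
  process.once("SIGINT", shutdown);
}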
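
Sketch 3: what the new env.ts fields could look like with zod. The two
variable names come from the commit; the coercion, enum, and default
details are assumptions.

import { z } from "zod";

const envSchema = z.object({
  // z.coerce turns the string "7100" from the environment into a number.
  // The default of 3000 is an assumption based on the compose port mapping.
  PORT: z.coerce.number().int().positive().default(3000),
  // Only the literal strings "true"/"false" parse, so a typo like "ture"
  // fails loudly at boot instead of silently disabling the multi-node
  // guard in filesystem.ts.
  MULTI_NODE_DEPLOYMENT: z
    .enum(["true", "false"])
    .default("false")
    .transform((v) => v === "true"),
});

export const env = envSchema.parse(process.env);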
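
Sketch 4: the per-port Documenso resolution order. The four field names
come from the commit; the function shape, the env var names, and the
numeric ID type are illustrative assumptions.

export interface PortDocumensoConfig {
  eoiTemplateId: number; // assumed numeric Documenso IDs
  clientRecipientId: number;
  developerRecipientId: number;
  approvalRecipientId: number;
}

// Hypothetical resolver: prefer the row stored in system_settings for
// this port, fall back to the process-wide env configuration.
export async function getPortDocumensoConfig(
  portId: string,
  loadFromSystemSettings: (portId: string) => Promise<PortDocumensoConfig | null>,
): Promise<PortDocumensoConfig> {
  const perPort = await loadFromSystemSettings(portId);
  if (perPort) return perPort;
  return {
    eoiTemplateId: Number(process.env.DOCUMENSO_EOI_TEMPLATE_ID),
    clientRecipientId: Number(process.env.DOCUMENSO_CLIENT_RECIPIENT_ID),
    developerRecipientId: Number(process.env.DOCUMENSO_DEVELOPER_RECIPIENT_ID),
    approvalRecipientId: Number(process.env.DOCUMENSO_APPROVAL_RECIPIENT_ID),
  };
}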
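
Sketch 5: the schema-level .references(...) style the last bullet switches
to, shown on clients.merged_into (named in the commit) plus an illustrative
documents.client_id; the column types and the other ten FKs are assumptions
following the same pattern.

import { pgTable, text, uuid, type AnyPgColumn } from "drizzle-orm/pg-core";

export const clients = pgTable("clients", {
  id: uuid("id").primaryKey().defaultRandom(),
  name: text("name").notNull(),
  // A self-FK needs the explicitly typed lazy callback so TypeScript
  // accepts the circular reference to the table being defined.
  mergedInto: uuid("merged_into").references((): AnyPgColumn => clients.id),
});

export const documents = pgTable("documents", {
  id: uuid("id").primaryKey().defaultRandom(),
  // With .references(...) on the column itself, drizzle-kit emits a real
  // FOREIGN KEY in the generated migration, so no "FK wired in
  // relations.ts" comment is needed.
  clientId: uuid("client_id").references(() => clients.id),
});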

Test status: 1168/1168 vitest, tsc clean.

Refs: docs/audit-comprehensive-2026-05-05.md HIGH §§5,6,7,8,9,10 +
MED §§14,15,16,18.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 19:52:58 +02:00

services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: port_nimara_crm
      POSTGRES_USER: ${DB_USER:-crm}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./docker/postgres/init.sql:/docker-entrypoint-initdb.d/01-init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-crm} -d port_nimara_crm"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped
    networks:
      - internal

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD} --maxmemory 256mb --maxmemory-policy allkeys-lru
    volumes:
      - redisdata:/data
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped
    networks:
      - internal

  crm-app:
    image: code.letsbe.solutions/letsbe/pn-new-crm/crm-app:latest
    env_file: .env
    ports:
      - "7100:3000"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/health"]
      interval: 15s
      timeout: 5s
      retries: 3
    # Give the SIGTERM handler in src/server.ts time to drain in-flight
    # HTTP requests, close Socket.io, and disconnect Redis before Docker
    # SIGKILLs the process. The internal hard timeout is 25s.
    stop_grace_period: 30s
    restart: unless-stopped
    networks:
      - internal

  crm-worker:
    image: code.letsbe.solutions/letsbe/pn-new-crm/crm-worker:latest
    env_file: .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    # Match the app: BullMQ jobs need time to finish or be released back
    # to the queue when worker.ts handles SIGTERM.
    stop_grace_period: 30s
    restart: unless-stopped
    networks:
      - internal

volumes:
  pgdata:
  redisdata:

networks:
  internal:
    driver: bridge