feat(bulk): synchronous bulk action endpoints + UI on interests/clients/yachts

Until now the only bulk action anywhere was Archive on the interests
list — implemented as parallel fan-out with no per-row failure
reporting. The bulk BullMQ worker was a TODO stub with no producers.

- bulk-helpers.runBulk wraps a per-row loop and returns
  {results, summary} to the caller. Batch size capped at the page size (100).
- New endpoints: /api/v1/{interests,clients,yachts,companies}/bulk
  with a Zod discriminated union over the action. Interests support
  change_stage + add_tag + remove_tag + archive; clients/yachts/companies
  support archive + add_tag + remove_tag. Each action is permission-gated
  individually (delete vs edit vs change_stage).
- interest-list, client-list, yacht-list expose the new actions in the
  bulk-action toolbar with dialogs for stage / tag selection. Failure
  summaries surface via window.confirm.
- bulkWorker stub gets a docblock explaining the v1 sync-only choice
  and what the queue is reserved for (CSV imports, port-wide migrations,
  bulk emails to >100 recipients).
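A minimal sketch of what a runBulk-style per-row loop could look like. The names and shapes here (`BulkRowResult`, `BulkSummary`, `perRow`, `BULK_MAX`) are assumptions for illustration, not the actual bulk-helpers implementation:

```typescript
// Hypothetical sketch of a runBulk-style helper: a capped per-row loop
// that collects per-row outcomes instead of failing the whole batch.
type BulkRowResult = { id: string; ok: boolean; error?: string };
type BulkSummary = { total: number; succeeded: number; failed: number };

const BULK_MAX = 100; // matches the page-size cap described above

async function runBulk(
  ids: string[],
  perRow: (id: string) => Promise<void>,
): Promise<{ results: BulkRowResult[]; summary: BulkSummary }> {
  if (ids.length > BULK_MAX) {
    throw new Error(`Bulk actions are capped at ${BULK_MAX} rows`);
  }
  const results: BulkRowResult[] = [];
  for (const id of ids) {
    try {
      await perRow(id);
      results.push({ id, ok: true });
    } catch (err) {
      // Record the failure and keep going: one bad row must not abort the batch.
      results.push({
        id,
        ok: false,
        error: err instanceof Error ? err.message : String(err),
      });
    }
  }
  const succeeded = results.filter((r) => r.ok).length;
  return {
    results,
    summary: { total: ids.length, succeeded, failed: ids.length - succeeded },
  };
}
```

Running sequentially (rather than the old parallel fan-out) is what makes the per-row failure list possible: each row's error is caught and attributed before the next row runs.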
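The Zod discriminated union over the action could be sketched like this for the interests variant. The field names (`ids`, `tag`, `stage`) and validation details are assumptions, not copied from the actual endpoint:

```typescript
// Sketch of a per-action Zod discriminated union for /api/v1/interests/bulk.
// Field names and constraints are illustrative assumptions.
import { z } from 'zod';

const ids = z.array(z.string()).min(1).max(100); // page-size cap

const bulkActionSchema = z.discriminatedUnion('action', [
  z.object({ action: z.literal('archive'), ids }),
  z.object({ action: z.literal('add_tag'), ids, tag: z.string().min(1) }),
  z.object({ action: z.literal('remove_tag'), ids, tag: z.string().min(1) }),
  z.object({ action: z.literal('change_stage'), ids, stage: z.string().min(1) }),
]);

type BulkAction = z.infer<typeof bulkActionSchema>;
```

Because the union discriminates on `action`, an unknown action or a missing action-specific field (e.g. `change_stage` without `stage`) is rejected at parse time, before any per-action permission check runs.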

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Author: Matt Ciaccio
Date: 2026-05-06 14:58:34 +02:00
parent c90876abad
commit 3f6a8aa3b8
9 changed files with 827 additions and 17 deletions


@@ -1,20 +1,28 @@
import { Worker, type Job } from 'bullmq';
import { env } from '@/lib/env';
import type { ConnectionOptions } from 'bullmq';
import { logger } from '@/lib/logger';
import { QUEUE_CONFIGS } from '@/lib/queue';
/**
 * v1 of bulk operations runs synchronously through per-entity bulk
 * endpoints (see `/api/v1/interests/bulk`) — a per-row loop, capped at
 * the page size (100). The synchronous path gives the user instant
 * feedback and a per-row failure list, which the queue can't provide.
 *
 * This worker remains here for genuinely-async cases (CSV imports,
 * port-wide migrations, bulk emails to >100 recipients) where the
 * caller polls for completion. Currently no producer enqueues to this
 * queue — add producers as those use cases surface.
 */
export const bulkWorker = new Worker(
  'bulk',
  async (job: Job) => {
    logger.info({ jobId: job.id, jobName: job.name }, 'Processing bulk job');
    // TODO(L2): implement bulk operation job handlers
    // - bulk status change across multiple records
    // - bulk tag assignment / removal
    // - bulk delete with soft-delete support
  },
  {
-    connection: { url: process.env.REDIS_URL! } as ConnectionOptions,
+    connection: { url: env.REDIS_URL } as ConnectionOptions,
    concurrency: QUEUE_CONFIGS.bulk.concurrency,
  },
);