Until now the only bulk action anywhere was Archive on the interests
list — implemented as parallel fan-out with no per-row failure
reporting. The bulk BullMQ worker was a TODO stub with no producers.
- bulk-helpers.runBulk wraps a per-row loop and returns
{results, summary} to the caller. Page size is capped at 100.
- New endpoints: /api/v1/{interests,clients,yachts,companies}/bulk
with a Zod discriminated union over the action. Interests support
change_stage + add_tag + remove_tag + archive; clients/yachts/companies
support archive + add_tag + remove_tag. Each action is permission-gated
individually (delete vs edit vs change_stage).
- interest-list, client-list, yacht-list expose the new actions in the
bulk-action toolbar with dialogs for stage / tag selection. Failure
summaries surface via window.confirm.
- bulkWorker stub gets a docblock explaining the v1 sync-only choice
and what the queue is reserved for (CSV imports, port-wide migrations,
bulk emails to >100 recipients).
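The per-row loop behind these endpoints can be sketched roughly as below. `runBulk` and the `{results, summary}` shape come from the commit; the row type, handler signature, and error handling are assumptions for illustration, not the actual helper.

```typescript
// Sketch of a runBulk-style helper, under assumed row/handler shapes.
type BulkResult = { id: string; ok: boolean; error?: string };

const BULK_PAGE_SIZE = 100; // cap matches the list page size

export async function runBulk<T extends { id: string }>(
  rows: T[],
  handler: (row: T) => Promise<void>,
): Promise<{
  results: BulkResult[];
  summary: { succeeded: number; failed: number };
}> {
  // Enforce the page-size cap before doing any work.
  if (rows.length > BULK_PAGE_SIZE) {
    throw new Error(`Bulk actions are capped at ${BULK_PAGE_SIZE} rows`);
  }
  const results: BulkResult[] = [];
  for (const row of rows) {
    try {
      await handler(row);
      results.push({ id: row.id, ok: true });
    } catch (err) {
      // One bad row never aborts the batch; it lands in the failure list.
      results.push({
        id: row.id,
        ok: false,
        error: err instanceof Error ? err.message : String(err),
      });
    }
  }
  const succeeded = results.filter((r) => r.ok).length;
  return { results, summary: { succeeded, failed: results.length - succeeded } };
}
```

This is why the synchronous path can report per-row failures: each row's error is captured individually rather than being swallowed by a parallel fan-out.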
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
33 lines
1.1 KiB
TypeScript
import { Worker, type Job, type ConnectionOptions } from 'bullmq';
import { env } from '@/lib/env';
import { logger } from '@/lib/logger';
import { QUEUE_CONFIGS } from '@/lib/queue';

/**
 * v1 of bulk operations runs synchronously through per-entity bulk
 * endpoints (see `/api/v1/interests/bulk`) — a per-row loop, capped at
 * the page size (100). The synchronous path gives the user instant
 * feedback and a per-row failure list, which the queue can't.
 *
 * This worker remains here for genuinely-async cases (CSV imports,
 * port-wide migrations, bulk emails to >100 recipients) where the
 * caller polls for completion. Currently no producer enqueues to this
 * queue — add producers as those use cases surface.
 */
export const bulkWorker = new Worker(
  'bulk',
  async (job: Job) => {
    logger.info({ jobId: job.id, jobName: job.name }, 'Processing bulk job');
  },
  {
    connection: { url: env.REDIS_URL } as ConnectionOptions,
    concurrency: QUEUE_CONFIGS.bulk.concurrency,
  },
);

bulkWorker.on('failed', (job, err) => {
  logger.error({ jobId: job?.id, jobName: job?.name, err }, 'Bulk job failed');
});
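When producers do appear, the processor will need to dispatch on the job payload. A minimal, hypothetical payload shape for the reserved use cases named in the docblock (every name below is an assumption, not part of the current code):

```typescript
// Hypothetical job payloads for the reserved async cases; nothing
// enqueues these yet.
type BulkJobData =
  | { kind: 'csv_import'; fileKey: string; entity: 'interests' | 'clients' }
  | { kind: 'port_migration'; fromPortId: string; toPortId: string }
  | { kind: 'bulk_email'; templateId: string; recipientIds: string[] };

// A dispatcher the log-only processor could grow into. Exhaustive
// switch: TypeScript narrows `data` by `kind` in each branch.
export function describeBulkJob(data: BulkJobData): string {
  switch (data.kind) {
    case 'csv_import':
      return `CSV import of ${data.entity} from ${data.fileKey}`;
    case 'port_migration':
      return `Migrate port ${data.fromPortId} -> ${data.toPortId}`;
    case 'bulk_email':
      return `Email ${data.recipientIds.length} recipients with template ${data.templateId}`;
  }
}
```

Keeping the payload a discriminated union mirrors the Zod discriminated union already used on the synchronous endpoints, so the two paths could eventually share action types.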