Adds the live dedup pipeline on top of the P1 library + P3 migration
script. The new `client/interest` model now flags likely duplicate
client records at creation time and gives admins a queue to triage
the borderline pairs the at-create check missed.
Three layers, per design §7:
Layer 1 — At-create suggestion
==============================
`GET /api/v1/clients/match-candidates`
Accepts free-text email / phone / name from the in-flight client
form, normalizes them via the dedup library, and returns scored
matches against the port's live client pool. Filters out
low-confidence noise (the background scoring queue picks those up
separately). Strict port scoping; never leaks across tenants.
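The confidence filtering the endpoint applies can be sketched as pure logic. This is an illustrative sketch, not the endpoint's actual code: the type and function names are hypothetical, and the 90 / 50 thresholds are the design defaults noted under "Out of scope, deferred".

```typescript
// Hypothetical tier classifier mirroring the endpoint's behaviour as
// described: high-confidence matches interrupt the form, the medium
// band gets the softer "possible match" treatment, and anything below
// the medium threshold is dropped (left to the background queue).
type MatchTier = 'high' | 'medium';

interface ScoredCandidate {
  clientId: string;
  score: number; // 0-100 from the dedup library
}

const HIGH_THRESHOLD = 90; // design default, tunable later
const MEDIUM_THRESHOLD = 50; // design default, tunable later

function tierOf(score: number): MatchTier | null {
  if (score >= HIGH_THRESHOLD) return 'high';
  if (score >= MEDIUM_THRESHOLD) return 'medium';
  return null; // low-confidence noise: filtered out of the response
}

function filterCandidates(
  candidates: ScoredCandidate[],
): Array<ScoredCandidate & { tier: MatchTier }> {
  return candidates
    .map((c) => ({ ...c, tier: tierOf(c.score) }))
    .filter((c): c is ScoredCandidate & { tier: MatchTier } => c.tier !== null);
}
```

A score of 95 would render the amber interrupt, 60 the softer hint, and 10 nothing at all.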
`<DedupSuggestionPanel>` (`src/components/clients/dedup-suggestion-panel.tsx`)
Debounced React Query hook. Renders nothing for short inputs or
no useful match. On a high-confidence match it interrupts visually
with an amber-tinted card and a "Use this client" primary button.
Medium confidence falls back to a softer "possible match — check
before creating" treatment.
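The "renders nothing for short inputs" gate can be shown as a small pure function. A hedged sketch only: the names and the minimum-length value are assumptions, not the panel's actual implementation, which additionally debounces the query key.

```typescript
// Hypothetical guard the panel could use to decide whether the
// React Query hook should fire at all: query only once at least one
// field is long enough to produce a useful match.
interface MatchQueryInput {
  email?: string;
  phone?: string;
  name?: string;
}

const MIN_QUERY_LENGTH = 3; // assumed minimum, not from the source

function shouldQuery(input: MatchQueryInput): boolean {
  const longest = Math.max(
    input.email?.trim().length ?? 0,
    input.phone?.trim().length ?? 0,
    input.name?.trim().length ?? 0,
  );
  return longest >= MIN_QUERY_LENGTH;
}
```

In React Query terms this would typically feed the `enabled` option, so short inputs never hit the API.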
`<ClientForm>`
Renders the panel above the form (create path only — skipped on
edit). New `onUseExistingClient` callback fires when the user
picks the existing client; the form closes and the parent decides
what to do (typically: navigate to that client's detail page or
open the create-interest dialog pre-filled).
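The parent-side wiring might look like the following sketch. Everything except the `onUseExistingClient` name is hypothetical; the point is that the form stays dumb and the page owns the navigation decision.

```typescript
// Hypothetical parent handler passed as onUseExistingClient: close
// the create form and navigate to the existing client's detail page
// (one of the typical outcomes described above).
interface RouterLike {
  push(path: string): void;
}

function handleUseExistingClient(clientId: string, router: RouterLike): void {
  // The form has already closed itself by the time this fires; the
  // parent just decides where the user lands next.
  router.push(`/clients/${clientId}`);
}
```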
Layer 2 — Merge service
=======================
`mergeClients` (`src/lib/services/client-merge.service.ts`)
The atomic merge primitive that everything else calls. Single
transaction. Per §6 of the design:
- Locks both rows (FOR UPDATE) so concurrent merges of the same
loser fail with a clear error rather than racing.
- Snapshots the full loser state (contacts / addresses / notes /
tags / interest+reservation IDs / relationship rows) into the
`client_merge_log.merge_details` JSONB column for the eventual
undo flow.
- Reattaches every loser-side row to the winner: interests,
reservations, contacts (skipping duplicates by `(channel, value)`),
addresses, notes, tags (deduped), relationships.
- Optional `fieldChoices` — per-scalar overrides letting the user
keep the loser's value for fullName / nationality / preferences /
timezone / source.
- Marks the loser archived with `mergedIntoClientId` set (a redirect
pointer for stragglers; never hard-deleted within the undo window).
- Resolves any matching `client_merge_candidates` row to status='merged'.
- Writes audit log entry.
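The contact-reattachment dedup in the bullets above can be illustrated in isolation. A sketch under assumed types, not the service's actual code, which does this inside the merge transaction:

```typescript
// Loser contacts whose (channel, value) pair already exists on the
// winner are skipped rather than copied across; duplicates within
// the loser's own contacts are collapsed too.
interface Contact {
  channel: string; // e.g. 'email' | 'phone'
  value: string;
}

function contactsToReattach(winner: Contact[], loser: Contact[]): Contact[] {
  const key = (c: Contact) => `${c.channel}\u0000${c.value}`;
  const seen = new Set(winner.map(key));
  return loser.filter((c) => {
    if (seen.has(key(c))) return false;
    seen.add(key(c)); // dedup within the loser side as well
    return true;
  });
}
```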
Schema additions:
- `clients.merged_into_client_id` (nullable text, indexed) — the
redirect pointer set on archive.
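How a straggler would follow that redirect pointer can be sketched as follows. Types and the hop cap are illustrative assumptions; nothing in this PR ships a resolver yet.

```typescript
// Walk merged_into_client_id pointers until a live record is reached.
// The hop cap guards against pathological chains (e.g. repeated
// merges within the undo window).
interface ClientRow {
  id: string;
  mergedIntoClientId: string | null;
}

function resolveClientId(
  startId: string,
  byId: Map<string, ClientRow>,
  maxHops = 5,
): string {
  let current = startId;
  for (let hop = 0; hop < maxHops; hop++) {
    const row = byId.get(current);
    if (!row || !row.mergedIntoClientId) return current;
    current = row.mergedIntoClientId;
  }
  return current;
}
```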
Tests: 6 cases against a real DB — happy path moves rows + writes log;
self-merge / cross-port / already-merged refused; duplicate-contact
deduped on reattach; fieldChoices copies loser values to winner.
Layer 3 — Admin review queue
============================
`GET /api/v1/admin/duplicates`
Pending merge candidates (status='pending') for the current port,
with both client summaries hydrated for side-by-side rendering.
Skips pairs where one side is already archived/merged.
`POST /api/v1/admin/duplicates/[id]/merge`
Confirms a candidate. Body picks the winner; the other side
becomes the loser. Calls into `mergeClients` — the only path that
writes `client_merge_log`.
`POST /api/v1/admin/duplicates/[id]/dismiss`
Marks the candidate dismissed. Future scoring runs skip the same
pair until a score increase re-creates the row.
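The dismissal check a future scoring run would apply can be sketched as pure logic. Types and names are hypothetical; re-creation is modeled as a score increase, per the dismiss handler's docstring in the attached file.

```typescript
// Hypothetical decision the background scorer would make per pair:
// no existing row means create one; pending/merged rows are left
// alone; dismissed rows are only re-surfaced on a higher score.
interface CandidateState {
  status: 'pending' | 'merged' | 'dismissed';
  score: number;
}

function shouldRecreate(
  existing: CandidateState | undefined,
  newScore: number,
): boolean {
  if (!existing) return true;
  if (existing.status !== 'dismissed') return false;
  return newScore > existing.score;
}
```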
`<DuplicatesReviewQueue>` (`/admin/duplicates`)
Side-by-side card UI for each pending pair. Click a card to pick
the winner; the other side is automatically the loser. Toolbar:
"Merge into selected" + "Dismiss". No per-field merge editor in
this PR — that's a future polish; the simple "pick the better row"
flow handles ~80% of cases.
Test coverage
=============
11 new integration tests (76 added in this branch total):
- 6 mergeClients (atomicity, refusal cases, contact dedup,
fieldChoices)
- 5 match-candidates API (shape, port scoping, confidence tiers,
Pattern F false-positive guard)
Full vitest: 926/926 passing (was 858 before the dedup branch).
Lint: clean. tsc: clean for new files (only pre-existing errors in
unrelated `tests/integration/` files remain, same as before this PR).
Out of scope, deferred
======================
- Background scoring cron that populates `client_merge_candidates`
(the queue is empty until this lands; manual seeding works for
now via the at-create flow).
- Side-by-side per-field merge editor with checkboxes (the simple
"pick the winner" UX shipped here covers ~80% of real cases).
- Admin settings UI for tuning the dedup thresholds. Defaults from
the design (90 / 50) are baked in for now.
- `unmergeClients` (the snapshot is captured in client_merge_log;
the undo endpoint just hasn't been wired yet).
These are all natural follow-up PRs that don't block shipping the
runtime UX.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
161 lines
4.9 KiB
TypeScript
import { NextResponse } from 'next/server';
import { and, eq, inArray } from 'drizzle-orm';

import type { AuthContext } from '@/lib/api/helpers';
import { db } from '@/lib/db';
import { clients, clientMergeCandidates } from '@/lib/db/schema/clients';
import { errorResponse, NotFoundError } from '@/lib/errors';
import {
  listPendingMergeCandidates,
  mergeClients,
  type MergeFieldChoices,
} from '@/lib/services/client-merge.service';

/**
 * GET /api/v1/admin/duplicates
 *
 * Pending merge candidates for the current port, sorted by score.
 * Each row hydrates its two client summaries so the review-queue UI
 * can render side-by-side cards without an N+1 fetch.
 */
export async function listHandler(_req: Request, ctx: AuthContext): Promise<NextResponse> {
  try {
    const pairs = await listPendingMergeCandidates(ctx.portId);
    if (pairs.length === 0) return NextResponse.json({ data: [] });

    const ids = Array.from(new Set(pairs.flatMap((p) => [p.clientAId, p.clientBId])));
    const clientRows = await db
      .select({
        id: clients.id,
        fullName: clients.fullName,
        archivedAt: clients.archivedAt,
        mergedIntoClientId: clients.mergedIntoClientId,
        createdAt: clients.createdAt,
      })
      .from(clients)
      .where(inArray(clients.id, ids));
    const clientById = new Map(clientRows.map((c) => [c.id, c]));

    const data = pairs
      .map((p) => {
        const a = clientById.get(p.clientAId);
        const b = clientById.get(p.clientBId);
        if (!a || !b) return null; // FK orphan — shouldn't happen, but be defensive
        // Skip pairs where one side has already been merged or archived.
        if (a.mergedIntoClientId || b.mergedIntoClientId || a.archivedAt || b.archivedAt) {
          return null;
        }
        return {
          id: p.id,
          score: p.score,
          reasons: p.reasons,
          createdAt: p.createdAt,
          clientA: { id: a.id, fullName: a.fullName, createdAt: a.createdAt },
          clientB: { id: b.id, fullName: b.fullName, createdAt: b.createdAt },
        };
      })
      .filter((row): row is NonNullable<typeof row> => row !== null);

    return NextResponse.json({ data });
  } catch (error) {
    return errorResponse(error);
  }
}

/**
 * POST /api/v1/admin/duplicates/[id]/merge
 *
 * Body: { winnerId: string, fieldChoices?: MergeFieldChoices }
 *
 * Confirms a merge candidate. The winner is the one the user picked
 * to keep; the other side becomes the loser. Calls into the merge
 * service which is the only path that touches client_merge_log.
 */
export async function confirmMergeHandler(
  req: Request,
  ctx: AuthContext,
  params: { id?: string },
): Promise<NextResponse> {
  try {
    const id = params.id ?? '';
    const body = (await req.json().catch(() => ({}))) as {
      winnerId?: string;
      fieldChoices?: MergeFieldChoices;
    };
    if (!body.winnerId) {
      return NextResponse.json({ error: 'winnerId required' }, { status: 400 });
    }

    const [candidate] = await db
      .select()
      .from(clientMergeCandidates)
      .where(
        and(
          eq(clientMergeCandidates.id, id),
          eq(clientMergeCandidates.portId, ctx.portId),
          eq(clientMergeCandidates.status, 'pending'),
        ),
      );
    if (!candidate) throw new NotFoundError('Merge candidate');

    const loserId =
      body.winnerId === candidate.clientAId
        ? candidate.clientBId
        : body.winnerId === candidate.clientBId
          ? candidate.clientAId
          : null;
    if (!loserId) {
      return NextResponse.json(
        { error: 'winnerId must match one of the candidate clients' },
        { status: 400 },
      );
    }

    const result = await mergeClients({
      winnerId: body.winnerId,
      loserId,
      mergedBy: ctx.userId,
      fieldChoices: body.fieldChoices,
    });

    return NextResponse.json({ data: result });
  } catch (error) {
    return errorResponse(error);
  }
}

/**
 * POST /api/v1/admin/duplicates/[id]/dismiss
 *
 * Mark a merge candidate as dismissed. The background scoring job
 * skips dismissed pairs on subsequent runs (a future score increase
 * can re-create them).
 */
export async function dismissHandler(
  _req: Request,
  ctx: AuthContext,
  params: { id?: string },
): Promise<NextResponse> {
  try {
    const id = params.id ?? '';
    const result = await db
      .update(clientMergeCandidates)
      .set({
        status: 'dismissed',
        resolvedAt: new Date(),
        resolvedBy: ctx.userId,
      })
      .where(
        and(
          eq(clientMergeCandidates.id, id),
          eq(clientMergeCandidates.portId, ctx.portId),
          eq(clientMergeCandidates.status, 'pending'),
        ),
      )
      .returning({ id: clientMergeCandidates.id });

    if (result.length === 0) throw new NotFoundError('Merge candidate');
    return NextResponse.json({ data: { id: result[0]!.id, status: 'dismissed' } });
  } catch (error) {
    return errorResponse(error);
  }
}