# Architecture & Decisions

## Overview
This document captures the core architectural decisions for the MOPC platform redesign. The redesign replaces the current Pipeline → Track → Stage model with a flatter Competition → Round model, introduces typed configurations per round type, and promotes juries, submission windows, and deliberation to first-class entities.
## Current Problems
| Problem | Impact |
|---|---|
| 3-level nesting (Pipeline → Track → Stage) | Cognitive overhead for admins; unnecessary abstraction for a linear flow |
| Generic `configJson` blobs per stage type | No type safety; hard to know what's configurable without reading code |
| No explicit jury entities | Juries are implicit (per-stage assignments); can't manage "Jury 1" as an entity |
| Single submission round | No way to open a second submission window for semi-finalists |
| Track layer for main flow | MAIN track adds indirection with no value for a linear competition |
| No mentoring workspace | Mentor file exchange exists but no comments, messaging, or promotion to submission |
| No winner confirmation/deliberation | No multi-party agreement step to cement winners |
| Missing round types | Can't model "Semi-finalist Submission", "Mentoring", or "Deliberation" steps |
## Before & After Architecture

### BEFORE (Current System)
Program
└── Pipeline (generic container)
├── Track: "Main Competition" (MAIN)
│ ├── Stage: "Intake" (INTAKE, configJson: {...})
│ ├── Stage: "Filtering" (FILTER, configJson: {...})
│ ├── Stage: "Evaluation" (EVALUATION, configJson: {...})
│ ├── Stage: "Selection" (SELECTION, configJson: {...})
│ ├── Stage: "Live Finals" (LIVE_FINAL, configJson: {...})
│ └── Stage: "Results" (RESULTS, configJson: {...})
├── Track: "Award 1" (AWARD)
└── Track: "Award 2" (AWARD)
- **Juries:** implicit (per-stage assignments, no named entity)
- **Submissions:** single round (one INTAKE stage)
- **Mentoring:** basic (messages + notes, no workspace)
- **Winner confirmation:** none
### AFTER (Redesigned System)
Program
└── Competition (replaces Pipeline, purpose-built)
├── Rounds (linear sequence, replaces Track + Stage):
│ ├── R1: "Application Window" ─────── (INTAKE)
│ ├── R2: "AI Screening" ──────────── (FILTERING)
│ ├── R3: "Jury 1 - Semi-finalist" ── (EVALUATION) ── juryGroupId: jury-1
│ ├── R4: "Semi-finalist Docs" ─────── (SUBMISSION)
│ ├── R5: "Jury 2 - Finalist" ──────── (EVALUATION) ── juryGroupId: jury-2
│ ├── R6: "Finalist Mentoring" ─────── (MENTORING)
│ ├── R7: "Live Finals" ────────────── (LIVE_FINAL) ── juryGroupId: jury-3
│ └── R8: "Deliberation" ───────────── (DELIBERATION)
│
├── Jury Groups (explicit, named entities):
│ ├── "Jury 1" ── members, caps, ratios ── linked to R3
│ ├── "Jury 2" ── members, caps, ratios ── linked to R5
│ └── "Jury 3" ── members ── linked to R7 + R8
│
├── Submission Windows (multi-round):
│ ├── Window 1: "Round 1 Docs" ── requirements: [Exec Summary, Business Plan]
│ └── Window 2: "Round 2 Docs" ── requirements: [Updated Plan, Video Pitch]
│
└── Special Awards (standalone entities):
├── "Innovation Award" ── mode: STAY_IN_MAIN
└── "Impact Award" ── mode: SEPARATE_POOL
- **Juries:** first-class `JuryGroup` entities with members, caps, ratios
- **Submissions:** multi-round with per-window file requirements
- **Mentoring:** full workspace with messaging, file exchange, comments, promotion
- **Deliberation:** structured voting with multiple modes, tie-breaking, result lock
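Concretely, the flattened hierarchy can be sketched as plain TypeScript types. This is illustrative only: the real models live in the Prisma schema, and field names here (`order`, `label`, `juryGroupId`) are assumptions, not the actual columns.

```typescript
// Illustrative sketch of the Competition -> Round hierarchy. Field names
// are assumptions; the authoritative shapes live in the Prisma schema.
type RoundType =
  | "INTAKE" | "FILTERING" | "EVALUATION" | "SUBMISSION"
  | "MENTORING" | "LIVE_FINAL" | "DELIBERATION";

interface Round {
  id: string;
  order: number;          // position in the linear sequence (R1..R8)
  label: string;
  type: RoundType;
  juryGroupId?: string;   // set only where a jury group is linked
}

interface Competition {
  id: string;
  programId: string;
  rounds: Round[];        // flat, ordered list; no Track layer
}

// The R1..R8 flow from the diagram above:
const rounds: Round[] = [
  { id: "r1", order: 1, label: "Application Window", type: "INTAKE" },
  { id: "r2", order: 2, label: "AI Screening", type: "FILTERING" },
  { id: "r3", order: 3, label: "Jury 1 - Semi-finalist", type: "EVALUATION", juryGroupId: "jury-1" },
  { id: "r4", order: 4, label: "Semi-finalist Docs", type: "SUBMISSION" },
  { id: "r5", order: 5, label: "Jury 2 - Finalist", type: "EVALUATION", juryGroupId: "jury-2" },
  { id: "r6", order: 6, label: "Finalist Mentoring", type: "MENTORING" },
  { id: "r7", order: 7, label: "Live Finals", type: "LIVE_FINAL", juryGroupId: "jury-3" },
  { id: "r8", order: 8, label: "Deliberation", type: "DELIBERATION" },
];
```

Note how the flat array makes round ordering trivial to reason about, where the old model required traversing Pipeline, Track, and Stage.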
## Guiding Principles
| # | Principle | Description |
|---|---|---|
| 1 | Domain over abstraction | Models map directly to competition concepts (Jury 1, Round 2, Submission Window). No unnecessary intermediate layers. |
| 2 | Linear by default | The main competition flow is sequential. Branching exists only for special awards (standalone entities). |
| 3 | Typed configs over JSON blobs | Each round type has an explicit Zod-validated configuration schema. No more guessing what fields are available. |
| 4 | Explicit entities | Juries, submission windows, deliberation sessions, and mentor workspaces are first-class database models. |
| 5 | Admin override everywhere | Any automated decision can be manually overridden with full audit trail. |
| 6 | Deep integration | Jury groups link to rounds, rounds link to submissions, submissions link to evaluations. No orphaned features. |
| 7 | No silent contract drift | All schema changes, config shape changes, and behavior changes go through review. Late-stage changes require explicit architecture sign-off. |
| 8 | Score independence | Juries evaluate independently. Cross-jury visibility is admin-configurable, not automatic. |
## Architecture Decision Records (ADRs)

### ADR-01: Eliminate the Track Layer
Decision: Remove the Track model entirely. The main competition is a flat sequence of Rounds. Special awards become standalone entities with routing modes.
Rationale: The MOPC competition has one main flow (R1→R8). The Track concept (MAIN/AWARD/SHOWCASE with RoutingMode and DecisionMode) was designed for branching flows that don't exist. Awards don't need their own track — they're parallel processes that reference the same projects.
Impact:
- `Track` model deleted
- `TrackKind`, `RoutingMode` enums deleted
- `ProjectStageState.trackId` removed
- Special awards modeled as standalone `SpecialAward` entities with `STAY_IN_MAIN` / `SEPARATE_POOL` modes
### ADR-02: Rename Pipeline → Competition, Stage → Round
Decision: Rename Pipeline to Competition and Stage to Round throughout the codebase.
Rationale: "Competition" and "Round" map directly to how admins and participants think about the system. A competition has rounds. This reduces cognitive overhead in every conversation, document, and UI label.
Impact:
- All database models, tRPC routers, services, and UI pages renamed
- Migration adds new tables, backfills, then drops old tables
- See 10-migration-strategy.md for full mapping
### ADR-03: Typed Configuration per Round Type
Decision: Replace the generic `configJson: Json` blob with 7 typed Zod schemas — one per `RoundType`.
Rationale: `configJson` provided flexibility at the cost of discoverability and safety. Developers and admins couldn't know what fields were available without reading service code. Typed configs give compile-time and runtime validation.
Impact:
- 7 Zod schemas: `IntakeConfig`, `FilteringConfig`, `EvaluationConfig`, `SubmissionConfig`, `MentoringConfig`, `LiveFinalConfig`, `DeliberationConfig`
- `configJson` field preserved for storage but validated against the appropriate schema on read/write
- Admin UI round configuration forms generated from schema definitions
- See 02-data-model.md for full schema definitions
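To illustrate what typed configs buy over a blob, here is a dependency-free TypeScript sketch using a discriminated union. The actual system validates with Zod schemas; every field name below is an assumption for illustration, not the real schema definition.

```typescript
// Sketch of per-round-type configs as a discriminated union. All field
// names are illustrative assumptions; the real system uses Zod schemas.
type RoundConfig =
  | { type: "INTAKE"; openAt: string; closeAt: string; maxApplications?: number }
  | { type: "FILTERING"; aiModel: string; passThreshold: number }
  | { type: "EVALUATION"; juryGroupId: string; generateAiShortlist: boolean }
  | { type: "SUBMISSION"; windowId: string; lockOnClose: boolean }
  | { type: "MENTORING"; allowPromotionToSubmission: boolean }
  | { type: "LIVE_FINAL"; showPriorJuryData: boolean }
  | { type: "DELIBERATION"; mode: "SINGLE_WINNER_VOTE" | "FULL_RANKING"; topN: number };

// The compiler narrows the union per branch, so each case only sees its
// own fields -- exactly the guarantee an untyped configJson blob lacked.
function describe(config: RoundConfig): string {
  switch (config.type) {
    case "FILTERING":
      return `AI filter (${config.aiModel}), threshold ${config.passThreshold}`;
    case "DELIBERATION":
      return `Deliberation in ${config.mode} mode, top ${config.topN} winners`;
    default:
      return config.type;
  }
}
```

The same narrowing applies in admin-form code: a form for a FILTERING round can only reference filtering fields, caught at compile time rather than in production.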
### ADR-04: JuryGroup as First-Class Entity
Decision: Create JuryGroup and JuryGroupMember models as named, manageable entities with caps, ratios, and policy configuration.
Rationale: Currently, juries are implicit — a set of assignments for a stage. This makes it impossible to manage "Jury 1" as a thing, configure per-juror caps, or support jury member overlap across rounds. Making JuryGroups explicit enables a dedicated "Juries" admin section.
Impact:
- New `JuryGroup` model with label, competition binding, and default policies
- New `JuryGroupMember` model with per-member cap overrides and ratio preferences
- Members can belong to multiple JuryGroups (Jury 1 + Jury 2 + award juries)
- See 04-jury-groups-and-assignment-policy.md for details
### ADR-05: Deliberation as Confirmation
Decision: Replace the WinnerProposal → jury sign-off → admin approval confirmation flow with a structured deliberation voting system. Deliberation IS the confirmation — no separate step needed.
Rationale: The original confirmation flow required unanimous jury agreement plus admin approval, which is rigid. The deliberation model supports two configurable voting modes (SINGLE_WINNER_VOTE and FULL_RANKING), multiple tie-breaking methods, admin override, and per-category independence — all while serving as the final agreement mechanism.
Impact:
- `WinnerProposal`, `WinnerApproval` models removed
- New models: `DeliberationSession`, `DeliberationVote`, `DeliberationResult`, `DeliberationParticipant`
- New enums: `DeliberationMode`, `DeliberationStatus`, `TieBreakMethod`, `DeliberationParticipantStatus`
- `ResultLock` model cements the outcome
- See 07-live-finals-and-deliberation.md for the full deliberation specification
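A minimal sketch of how a SINGLE_WINNER_VOTE tally might surface ties instead of resolving them silently. The vote shape and function names are hypothetical; in the real system an unresolved tie would be handed to the configured TieBreakMethod or to admin override.

```typescript
// Hypothetical SINGLE_WINNER_VOTE tally: count votes per project and
// report either a unique winner or the tied leaders for tie-breaking.
interface Vote { jurorId: string; projectId: string }

function tallySingleWinner(
  votes: Vote[],
): { winnerId: string | null; tied: string[] } {
  const counts = new Map<string, number>();
  for (const v of votes) {
    counts.set(v.projectId, (counts.get(v.projectId) ?? 0) + 1);
  }
  const max = Math.max(...Array.from(counts.values()));
  const leaders = Array.from(counts.entries())
    .filter(([, n]) => n === max)
    .map(([id]) => id);
  // A unique leader wins outright; a tie is surfaced for the configured
  // TieBreakMethod (or admin override) rather than resolved silently.
  return leaders.length === 1
    ? { winnerId: leaders[0], tied: [] }
    : { winnerId: null, tied: leaders };
}
```

The FULL_RANKING mode would aggregate ranked ballots instead of single votes, but the same principle applies: ties are explicit outputs, never hidden decisions.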
### ADR-06: Score Independence with Configurable Visibility
Decision: Juries are fully independent during active evaluation. During Live Finals and Deliberation, prior jury data is visible to Jury 3 only if the admin enables `showPriorJuryData`. All cross-jury data is available in reports.
Rationale: Independent evaluation prevents bias during scoring. However, during the live finals phase, Jury 3 may benefit from seeing the evaluation history. This is a judgment call that should be made by the program admin per competition.
Impact:
- No cross-jury queries in jury evaluation pages
- `showPriorJuryData` toggle on `LiveFinalConfig` and `DeliberationSession`
- Reports section has full cross-jury analytics for internal use
- See 03-competition-flow.md for cross-cutting behaviors
### ADR-07: Top-N Configurable Winner Model
Decision: Winners are Top N (configurable, default 3) per category, all projects ranked within their category, with podium UI and cross-category comparison view.
Rationale: Different competitions may want different numbers of winners. All finalist projects should be ranked regardless — this provides maximum flexibility for award ceremonies and reporting.
Impact:
- `topN` field in `DeliberationConfig`
- `finalRank` field in `DeliberationResult`
- UI: podium display for top 3, full ranking table, cross-category comparison view
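A sketch of the ranking logic this decision implies, assuming a simple numeric score per project. The score field and function names are illustrative, not the real deliberation output shape.

```typescript
// Sketch: rank every project within a category, then slice the podium.
// `score` stands in for whatever value the deliberation result produces.
interface Ranked { projectId: string; score: number }

function rankCategory(results: Ranked[], topN = 3) {
  const ranked = [...results]
    .sort((a, b) => b.score - a.score)
    .map((r, i) => ({ ...r, finalRank: i + 1 })); // every project gets a rank
  return { ranked, winners: ranked.slice(0, topN) }; // winners = top N
}
```

Ranking everyone (not just the winners) is what enables the full ranking table and cross-category comparison views with no extra computation.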
### ADR-08: 5-Layer Policy Precedence
Decision: Assignment and configuration policies resolve through a 5-layer precedence chain.
Rationale: Different levels of the system need to set defaults while allowing overrides. A judge might have a program-wide cap default but need a competition-specific override.
| Layer | Scope | Example |
|---|---|---|
| 1. System default | Platform-wide | softCapBuffer = 10 |
| 2. Program default | Per-program settings | defaultCapMode = SOFT |
| 3. Jury group default | Per-JuryGroup | maxProjectsPerJuror = 15 |
| 4. Per-member override | Individual judge | judge-A.maxProjects = 20 |
| 5. Admin override | Always wins | force-assign project X to judge-A |
Impact:
- Policy resolution function consulted during assignment
- Admin overrides logged to `DecisionAuditLog`
- See 04-jury-groups-and-assignment-policy.md for implementation
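The precedence chain for a juror's project cap could resolve like this. This is a sketch: field names are illustrative, and the layer-5 admin override is modeled as a force-assignment outside this lookup rather than a value in the chain.

```typescript
// Sketch of layers 1-4 of the precedence chain for a juror's project cap.
// The most specific defined value wins; layer 5 (admin override) is a
// direct force-assignment elsewhere, so it does not appear in this lookup.
interface CapPolicy {
  systemDefault: number;     // layer 1: platform-wide
  programDefault?: number;   // layer 2: per-program settings
  juryGroupDefault?: number; // layer 3: per-JuryGroup
  memberOverride?: number;   // layer 4: individual judge
}

function resolveCap(p: CapPolicy): number {
  return p.memberOverride
      ?? p.juryGroupDefault
      ?? p.programDefault
      ?? p.systemDefault;
}
```

Nullish coalescing keeps the resolution order readable and makes it mechanical to add a new layer later if one is ever needed.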
### ADR-09: AI Ranked Shortlist at Every Evaluation Round
Decision: AI generates a recommended ranked shortlist per category at the end of every evaluation round (Jury 1, Jury 2, and any award evaluation). Admin can always override.
Rationale: Consistent AI assistance across all evaluation rounds reduces admin workload and provides data-driven recommendations.
Impact:
- `generateAiShortlist` flag in `EvaluationConfig`
- AI shortlist service invoked at round completion
- Admin UI shows AI recommendations alongside manual selection controls
### ADR-10: Assignment Intent Lifecycle Management
Decision: Track assignment intents through a full lifecycle (PENDING → HONORED → OVERRIDDEN → EXPIRED → CANCELLED) rather than a simple "create and forget" approach.
Rationale: Intents created at invite time may not be fulfilled due to COI conflicts, cap limits, or admin changes. Without lifecycle tracking, stale intents accumulate with no visibility into why they weren't honored. The lifecycle state machine provides clear audit trails and enables admin dashboards showing intent fulfillment rates.
Impact:
- `AssignmentIntentStatus` enum with 5 states (all terminal except PENDING)
- Algorithm must check and honor PENDING intents before general assignment
- Round close triggers batch expiry of unmatched intents
- See 04-jury-groups-and-assignment-policy.md
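The lifecycle can be sketched as a small state machine. The state names follow the ADR; the transition table itself and the batch-expiry helper are assumptions about how it might be implemented.

```typescript
// Sketch of the AssignmentIntent state machine: PENDING is the only
// non-terminal state, per ADR-10. The transition table is an assumption.
type IntentStatus = "PENDING" | "HONORED" | "OVERRIDDEN" | "EXPIRED" | "CANCELLED";

const transitions: Record<IntentStatus, IntentStatus[]> = {
  PENDING: ["HONORED", "OVERRIDDEN", "EXPIRED", "CANCELLED"],
  HONORED: [],     // terminal
  OVERRIDDEN: [],  // terminal
  EXPIRED: [],     // terminal
  CANCELLED: [],   // terminal
};

function canTransition(from: IntentStatus, to: IntentStatus): boolean {
  return transitions[from].includes(to);
}

// Round close batch-expires anything still PENDING:
function expireUnmatched(statuses: IntentStatus[]): IntentStatus[] {
  return statuses.map((s) => (s === "PENDING" ? "EXPIRED" : s));
}
```

Encoding the table as data (rather than scattered `if` checks) makes the "stale intents cannot linger" invariant easy to audit and test.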
### ADR-11: Rejected — Submission Bundle State Tracking
Decision: Do NOT implement a formal SubmissionBundle entity with state machine (INCOMPLETE → COMPLETE → LOCKED). Instead, derive completeness from slot requirements vs. uploaded files.
Rationale: The Codex plan proposed a `SubmissionBundle` model that tracked aggregate state across all file slots. This adds a model, a state machine, and synchronization logic (bundle state must be updated whenever any file changes). The simpler approach, checking required slots against uploaded files at query time, achieves the same result without the maintenance burden. The `SubmissionWindow.lockOnClose` flag handles the lock lifecycle.
Impact:
- No `SubmissionBundle` model needed
- Completeness is a computed property, not a stored state
- `SubmissionWindow` + `FileRequirement` + `ProjectFile` are sufficient
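A sketch of completeness as a derived value. The model shapes are simplified, and `slotKey` is an assumed linking field between requirements and uploaded files.

```typescript
// Sketch of ADR-11: completeness computed at query time by comparing the
// window's required slots against uploaded files -- no stored bundle
// state to keep in sync. `slotKey` is an assumed linking field.
interface FileRequirement { slotKey: string; required: boolean }
interface ProjectFile { slotKey: string }

function missingSlots(
  requirements: FileRequirement[],
  files: ProjectFile[],
): string[] {
  const uploaded = new Set(files.map((f) => f.slotKey));
  return requirements
    .filter((r) => r.required && !uploaded.has(r.slotKey))
    .map((r) => r.slotKey);
}

const isComplete = (reqs: FileRequirement[], files: ProjectFile[]) =>
  missingSlots(reqs, files).length === 0;
```

Because completeness is derived, deleting or replacing a file can never leave a stale "COMPLETE" flag behind, which is exactly the synchronization bug class the rejected bundle model would have invited.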
### ADR-12: Optional Purpose Keys for Analytics Grouping
Decision: Add an optional `Round.purposeKey: String?` field for analytics grouping rather than a new `PurposeKey` enum.
Rationale: Different programs may want to compare rounds across competitions (e.g., "all jury-1 selections across 2025 and 2026"). A purposeKey like "jury1_selection" enables this without making it a structural dependency. A free-text string is preferred over an enum because new analytics categories shouldn't require schema migrations.
Impact:
- `Round.purposeKey` is optional and has no runtime behavior
- Used only for cross-competition analytics queries and reporting
- Convention: use snake_case descriptive keys (e.g., "jury1_selection", "semifinal_docs", "live_finals")
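For example, a cross-competition analytics query might group rounds on the key like this. The row shape (`competitionId`, `passRate`) is illustrative, not a real reporting schema.

```typescript
// Sketch: group round metrics across competitions on the optional
// purposeKey. Rounds without a key simply fall outside any group.
interface RoundRow { competitionId: string; purposeKey?: string; passRate: number }

function groupByPurpose(rows: RoundRow[]): Map<string, RoundRow[]> {
  const groups = new Map<string, RoundRow[]>();
  for (const row of rows) {
    if (!row.purposeKey) continue; // no key -> not part of any analytics group
    const bucket = groups.get(row.purposeKey) ?? [];
    bucket.push(row);
    groups.set(row.purposeKey, bucket);
  }
  return groups;
}
```

This is why the free-text key suffices: the grouping logic never needs to know the set of valid keys in advance, so new analytics categories cost nothing.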
## Governance Policy

### No Silent Contract Drift
Any change to the following after Phase 0 (Contract Freeze) requires explicit architecture review:
- Prisma model additions or field changes
- Zod config schema modifications
- RoundType enum changes
- tRPC procedure signature changes
- Assignment policy behavior changes
Changes are not blocked — they require a documented decision with rationale, impact analysis, and sign-off from the architecture owner. See 13-open-questions-and-governance.md for governance process.