# Round: Evaluation (Jury 1 & Jury 2)

## 1. Purpose & Position in Flow

The EVALUATION round is the core judging mechanism of the competition. It appears **twice** in the standard flow:

| Instance | Name | Position | Jury | Purpose | Output |
|----------|------|----------|------|---------|--------|
| Round 3 | "Jury 1 — Semi-finalist Selection" | After FILTERING | Jury 1 | Score projects, select semi-finalists | Semi-finalists per category |
| Round 5 | "Jury 2 — Finalist Selection" | After SUBMISSION Round 2 | Jury 2 | Score semi-finalists, select finalists + awards | Finalists per category |

Both instances use the same `RoundType.EVALUATION` but are configured independently with:

- Different jury groups (Jury 1 vs Jury 2)
- Different evaluation forms/rubrics
- Different visible submission windows (Jury 1 sees Window 1 only; Jury 2 sees Windows 1+2)
- Different advancement counts

---
## 2. Data Model

### Round Record

```
Round {
  id: "round-jury-1"
  competitionId: "comp-2026"
  name: "Jury 1 — Semi-finalist Selection"
  roundType: EVALUATION
  status: ROUND_DRAFT → ROUND_ACTIVE → ROUND_CLOSED
  sortOrder: 2
  windowOpenAt: "2026-04-01"   // Evaluation window start
  windowCloseAt: "2026-04-30"  // Evaluation window end
  juryGroupId: "jury-group-1"  // Links to Jury 1
  submissionWindowId: null     // EVALUATION rounds don't collect submissions
  configJson: { ...EvaluationConfig }
}
```
### EvaluationConfig

```typescript
type EvaluationConfig = {
  // --- Assignment Settings ---
  requiredReviewsPerProject: number // How many jurors review each project (default: 3)

  // --- Scoring Mode ---
  scoringMode: "criteria" | "global" | "binary"
  // criteria: Score per criterion + weighted total
  // global:   Single 1-10 score
  // binary:   Yes/No decision (semi-finalist worthy?)
  requireFeedback: boolean // Must provide text feedback (default: true)

  // --- COI ---
  coiRequired: boolean // Must declare COI before evaluating (default: true)

  // --- Peer Review ---
  peerReviewEnabled: boolean // Jurors can see anonymized peer evaluations after submission
  anonymizationLevel: "fully_anonymous" | "show_initials" | "named"

  // --- AI Features ---
  aiSummaryEnabled: boolean    // Generate AI-powered evaluation summaries
  aiAssignmentEnabled: boolean // Allow AI-suggested jury-project matching

  // --- Advancement ---
  advancementMode: "auto_top_n" | "admin_selection" | "ai_recommended"
  advancementConfig: {
    perCategory: boolean // Separate counts per STARTUP / BUSINESS_CONCEPT
    startupCount: number // How many startups advance (default: 10 for Jury 1, 3 for Jury 2)
    conceptCount: number // How many concepts advance
    tieBreaker: "admin_decides" | "highest_individual" | "revote"
  }
}
```
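
For concreteness, here is what a Round 3 (Jury 1) `configJson` might contain under this type. This is a sketch: the values are illustrative choices, not defaults taken from the codebase, and the type is repeated so the snippet stands alone.

```typescript
// Illustrative EvaluationConfig instance for Round 3 (Jury 1).
// Values are example choices, not defaults from the codebase.
type EvaluationConfig = {
  requiredReviewsPerProject: number
  scoringMode: "criteria" | "global" | "binary"
  requireFeedback: boolean
  coiRequired: boolean
  peerReviewEnabled: boolean
  anonymizationLevel: "fully_anonymous" | "show_initials" | "named"
  aiSummaryEnabled: boolean
  aiAssignmentEnabled: boolean
  advancementMode: "auto_top_n" | "admin_selection" | "ai_recommended"
  advancementConfig: {
    perCategory: boolean
    startupCount: number
    conceptCount: number
    tieBreaker: "admin_decides" | "highest_individual" | "revote"
  }
}

const jury1Config: EvaluationConfig = {
  requiredReviewsPerProject: 3,
  scoringMode: "criteria",
  requireFeedback: true,
  coiRequired: true,
  peerReviewEnabled: false, // typically off for the first jury pass
  anonymizationLevel: "fully_anonymous",
  aiSummaryEnabled: true,
  aiAssignmentEnabled: false,
  advancementMode: "admin_selection",
  advancementConfig: {
    perCategory: true,
    startupCount: 10,
    conceptCount: 10,
    tieBreaker: "admin_decides",
  },
}
```

Because `configJson` is schemaless at the database level, validating objects like this at round creation time (for example with a schema validator) is the natural enforcement point.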
### Related Models

| Model | Role |
|-------|------|
| `JuryGroup` | Named jury entity linked to this round |
| `JuryGroupMember` | Members of the jury with per-juror overrides |
| `Assignment` | Juror-project pairing for this round, linked to JuryGroup |
| `Evaluation` | Score/feedback submitted by a juror for one project |
| `EvaluationForm` | Rubric/criteria definition for this round |
| `ConflictOfInterest` | COI declaration per assignment |
| `GracePeriod` | Per-juror deadline extension |
| `EvaluationSummary` | AI-generated insights per project per round |
| `EvaluationDiscussion` | Peer review discussion threads |
| `RoundSubmissionVisibility` | Which submission windows' docs the jury can see |
| `AdvancementRule` | How projects advance after evaluation |
| `ProjectRoundState` | Per-project state in this round |

---
## 3. Setup Phase (Before Window Opens)

### 3.1 Admin Creates the Evaluation Round

Admin uses the competition wizard or round management UI to:

1. **Create the Round** with type EVALUATION
2. **Link a JuryGroup** — select "Jury 1" (or create a new jury group)
3. **Set the evaluation window** — start and end dates
4. **Configure the evaluation form** — scoring criteria, weights, scales
5. **Set visibility** — which submission windows the jury can see (via RoundSubmissionVisibility)
6. **Configure advancement rules** — how many projects advance per category
### 3.2 Jury Group Configuration

The linked JuryGroup has:

```
JuryGroup {
  name: "Jury 1"
  defaultMaxAssignments: 20  // Default cap per juror
  defaultCapMode: SOFT       // HARD | SOFT | NONE
  softCapBuffer: 2           // Can exceed by 2 for load balancing
  categoryQuotasEnabled: true
  defaultCategoryQuotas: {
    "STARTUP": { "min": 3, "max": 15 },
    "BUSINESS_CONCEPT": { "min": 3, "max": 15 }
  }
  allowJurorCapAdjustment: true    // Jurors can adjust their cap during onboarding
  allowJurorRatioAdjustment: true  // Jurors can adjust their category preference
}
```
### 3.3 Per-Juror Overrides

Each `JuryGroupMember` can override group defaults:

```
JuryGroupMember {
  juryGroupId: "jury-group-1"
  userId: "judge-alice"
  maxAssignmentsOverride: 25  // Alice wants more projects
  capModeOverride: HARD       // Alice: hard cap, no exceptions
  categoryQuotasOverride: {
    "STARTUP": { "min": 5, "max": 20 },  // Alice prefers startups
    "BUSINESS_CONCEPT": { "min": 0, "max": 5 }
  }
  preferredStartupRatio: 0.8  // 80% startups
}
```
### 3.4 Juror Onboarding (Optional)

If `allowJurorCapAdjustment` or `allowJurorRatioAdjustment` is true:

1. A juror opens their jury dashboard for the first time after being added to the group
2. A one-time onboarding dialog appears:
   - "Your default maximum is 20 projects. Would you like to adjust?" (slider)
   - "Your default startup/concept ratio is 50/50. Would you like to adjust?" (slider)
3. Juror saves preferences → stored in `JuryGroupMember.maxAssignmentsOverride` and `preferredStartupRatio`
4. Dialog doesn't appear again (tracked via `JuryGroupMember.updatedAt` or a flag)

---
## 4. Assignment System (Enhanced)

### 4.1 Assignment Algorithm — Jury-Group-Aware

The current `stage-assignment.ts` algorithm is enhanced to:

1. **Filter jury pool by JuryGroup** — only members of the linked jury group are considered
2. **Apply hard/soft cap logic** per juror
3. **Apply category quotas** per juror
4. **Score candidates** using existing expertise matching + workload balancing + geo-diversity

#### Effective Limits Resolution

```typescript
function getEffectiveLimits(member: JuryGroupMember, group: JuryGroup): EffectiveLimits {
  return {
    maxAssignments: member.maxAssignmentsOverride ?? group.defaultMaxAssignments,
    capMode: member.capModeOverride ?? group.defaultCapMode,
    softCapBuffer: group.softCapBuffer, // Group-level only (not per-juror)
    categoryQuotas: member.categoryQuotasOverride ?? group.defaultCategoryQuotas,
    categoryQuotasEnabled: group.categoryQuotasEnabled,
    preferredStartupRatio: member.preferredStartupRatio,
  }
}
```
#### Cap Enforcement Logic

```typescript
function canAssignMore(
  jurorId: string,
  projectCategory: CompetitionCategory,
  currentLoad: LoadTracker,
  limits: EffectiveLimits
): { allowed: boolean; penalty: number; reason?: string } {
  const total = currentLoad.total(jurorId)
  const catLoad = currentLoad.byCategory(jurorId, projectCategory)

  // 1. HARD cap check
  if (limits.capMode === "HARD" && total >= limits.maxAssignments) {
    return { allowed: false, penalty: 0, reason: "Hard cap reached" }
  }

  // 2. SOFT cap check (can exceed by buffer)
  let overflowPenalty = 0
  if (limits.capMode === "SOFT") {
    if (total >= limits.maxAssignments + limits.softCapBuffer) {
      return { allowed: false, penalty: 0, reason: "Soft cap + buffer exceeded" }
    }
    if (total >= limits.maxAssignments) {
      // In buffer zone — apply increasing penalty
      overflowPenalty = (total - limits.maxAssignments + 1) * 15
    }
  }

  // 3. Category quota check
  if (limits.categoryQuotasEnabled && limits.categoryQuotas) {
    const quota = limits.categoryQuotas[projectCategory]
    if (quota) {
      if (catLoad >= quota.max) {
        return { allowed: false, penalty: 0, reason: `Category ${projectCategory} max reached (${quota.max})` }
      }
      // Bonus for under-min
      if (catLoad < quota.min) {
        overflowPenalty -= 15 // Negative penalty = bonus
      }
    }
  }

  // 4. Ratio preference alignment
  if (limits.preferredStartupRatio != null && total > 0) {
    const currentStartupRatio = currentLoad.byCategory(jurorId, "STARTUP") / total
    const isStartup = projectCategory === "STARTUP"
    const wantMore = isStartup
      ? currentStartupRatio < limits.preferredStartupRatio
      : currentStartupRatio > limits.preferredStartupRatio
    if (wantMore) overflowPenalty -= 10 // Bonus for aligning with preference
    else overflowPenalty += 10          // Penalty for diverging
  }

  return { allowed: true, penalty: overflowPenalty }
}
```
### 4.2 Assignment Flow

```
1. Admin opens Assignment panel for Round 3 (Jury 1)
2. System loads:
   - Projects with ProjectRoundState PENDING/IN_PROGRESS in this round
   - JuryGroup members (with effective limits)
   - Existing assignments (to avoid duplicates)
   - COI records (to skip conflicted pairs)
3. Admin clicks "Generate Suggestions"
4. Algorithm runs:
   a. For each project (sorted by fewest current assignments):
      - Score each eligible juror (tag matching + workload + geo + cap/quota penalties)
      - Select top N jurors (N = requiredReviewsPerProject - existing reviews)
      - Track load in jurorLoadMap
   b. Report unassigned projects (jurors at capacity)
5. Admin reviews preview:
   - Assignment matrix (juror × project grid)
   - Load distribution chart
   - Unassigned projects list
   - Category distribution per juror
6. Admin can:
   - Accept all suggestions
   - Modify individual assignments (drag-drop or manual add/remove)
   - Re-run with different parameters
7. Admin clicks "Apply Assignments"
8. System creates Assignment records with juryGroupId set
9. Notifications sent to jurors
```
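
The core of step 4 can be sketched as a greedy loop. This is a simplified illustration: `scoreCandidate`, the data shapes, and the cap handling are stand-ins for the real `round-assignment.ts` services.

```typescript
// Simplified sketch of the suggestion loop in step 4 of the flow above.
// `scoreCandidate` and the data shapes are illustrative, not the real API.
type Project = { id: string; existingReviews: number }
type Juror = { id: string; maxAssignments: number }

function suggestAssignments(
  projects: Project[],
  jurors: Juror[],
  requiredReviewsPerProject: number,
  scoreCandidate: (jurorId: string, projectId: string, load: number) => number,
): { pairs: Array<[string, string]>; unassigned: string[] } {
  const load = new Map<string, number>(jurors.map(j => [j.id, 0] as [string, number]))
  const pairs: Array<[string, string]> = []
  const unassigned: string[] = []

  // Projects with the fewest existing reviews are served first
  const ordered = [...projects].sort((a, b) => a.existingReviews - b.existingReviews)

  for (const project of ordered) {
    const needed = requiredReviewsPerProject - project.existingReviews
    // Rank jurors for this project, skipping anyone already at their cap
    const ranked = jurors
      .filter(j => (load.get(j.id) ?? 0) < j.maxAssignments)
      .map(j => ({ j, score: scoreCandidate(j.id, project.id, load.get(j.id) ?? 0) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, needed)

    if (ranked.length < needed) unassigned.push(project.id)
    for (const { j } of ranked) {
      pairs.push([j.id, project.id])
      load.set(j.id, (load.get(j.id) ?? 0) + 1)
    }
  }
  return { pairs, unassigned }
}
```

The real `assignment.getSuggestions` additionally consults `canAssignMore` for soft-cap and quota penalties; here the cap check is folded into the `filter` for brevity.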
### 4.3 AI-Powered Assignment (Optional)

If `aiAssignmentEnabled` is true in config:

1. Admin clicks "AI Assignment Suggestions"
2. System calls `ai-assignment.ts`:
   - Anonymizes juror profiles and project descriptions
   - Sends to GPT with matching instructions
   - Returns confidence scores and reasoning
3. AI suggestions shown alongside algorithm suggestions
4. Admin picks which to use or mixes both
### 4.4 Handling Unassigned Projects

When every juror with a SOFT cap has reached cap + buffer:

1. Remaining projects become "unassigned"
2. Admin dashboard highlights these prominently
3. Admin can:
   - Manually assign to specific jurors (bypasses cap — manual override)
   - Increase a juror's cap
   - Add more jurors to the jury group
   - Reduce `requiredReviewsPerProject` for remaining projects

---
## 5. Jury Evaluation Experience

### 5.1 Jury Dashboard

When a Jury 1 member opens their dashboard:

```
┌─────────────────────────────────────────────────────┐
│ JURY 1 — Semi-finalist Selection                    │
│ ─────────────────────────────────────────────────── │
│ Evaluation Window: April 1 – April 30               │
│ ⏱ 12 days remaining                                 │
│                                                     │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌────────┐   │
│ │    15    │ │    8     │ │    2     │ │   5    │   │
│ │  Total   │ │ Complete │ │ In Draft │ │ Pending│   │
│ └──────────┘ └──────────┘ └──────────┘ └────────┘   │
│                                                     │
│ [Continue Next Evaluation →]                        │
│                                                     │
│ Recent Assignments                                  │
│ ┌──────────────────────────────────────────────┐    │
│ │ OceanClean AI    │ Startup │ ✅ Done   │ View │   │
│ │ Blue Carbon Hub  │ Concept │ ⏳ Draft  │ Cont │   │
│ │ SeaWatch Monitor │ Startup │ ⬜ Pending│ Start│   │
│ │ ...                                          │    │
│ └──────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────┘
```

Key elements:

- **Deadline countdown** — prominent timer showing days/hours remaining
- **Progress stats** — total, completed, in-draft, pending
- **Quick action CTA** — jump to next unevaluated project
- **Assignment list** — sorted by status (pending first, then drafts, then done)
### 5.2 COI Declaration (Blocking)

Before evaluating any project, the juror MUST declare COI:

```
┌───────────────────────────────────────────┐
│ Conflict of Interest Declaration          │
│                                           │
│ Do you have a conflict of interest with   │
│ "OceanClean AI" (Startup)?                │
│                                           │
│ ○ No conflict — I can evaluate fairly     │
│ ○ Yes, I have a conflict:                 │
│     Type: [Financial ▾]                   │
│     Description: [________________]       │
│                                           │
│ [Submit Declaration]                      │
└───────────────────────────────────────────┘
```

- If **No conflict**: Proceed to evaluation form
- If **Yes**: Assignment flagged, admin notified, juror may be reassigned
- COI declaration is logged in `ConflictOfInterest` model
- Admin can review and take action (cleared / reassigned / noted)
### 5.3 Evaluation Form

The form adapts to the `scoringMode`:

#### Criteria Mode (default for Jury 1 and Jury 2)

```
┌───────────────────────────────────────────────────┐
│ Evaluating: OceanClean AI (Startup)               │
│ ───────────────────────────────────────────────── │
│                                                   │
│ [📄 Documents]  [📊 Scoring]  [💬 Feedback]       │
│                                                   │
│ ── DOCUMENTS TAB ──                               │
│ ┌─ Round 1 Application Docs ─────────────────┐    │
│ │ 📄 Executive Summary.pdf     [Download]    │    │
│ │ 📄 Business Plan.pdf         [Download]    │    │
│ └────────────────────────────────────────────┘    │
│                                                   │
│ (Jury 2 also sees:)                               │
│ ┌─ Round 2 Semi-finalist Docs ───────────────┐    │
│ │ 📄 Updated Business Plan.pdf [Download]    │    │
│ │ 🎥 Video Pitch.mp4           [Play]        │    │
│ └────────────────────────────────────────────┘    │
│                                                   │
│ ── SCORING TAB ──                                 │
│ Innovation & Impact [1] [2] [3] [4] [5]  (w:30%)  │
│ Feasibility         [1] [2] [3] [4] [5]  (w:25%)  │
│ Team & Execution    [1] [2] [3] [4] [5]  (w:25%)  │
│ Ocean Relevance     [1] [2] [3] [4] [5]  (w:20%)  │
│                                                   │
│ Overall Score: 3.8 / 5.0 (auto-calculated)        │
│                                                   │
│ ── FEEDBACK TAB ──                                │
│ Feedback: [________________________________]      │
│                                                   │
│ [💾 Save Draft]  [✅ Submit Evaluation]           │
│ (Auto-saves every 30s)                            │
└───────────────────────────────────────────────────┘
```
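
In criteria mode, the auto-calculated overall score is a weighted average of the criterion scores. A minimal sketch, using the mockup's weights (the function name and the one-decimal rounding rule are assumptions):

```typescript
// Weighted overall score for criteria mode (sketch; names are illustrative).
type CriterionScore = { score: number; weight: number } // weight as a fraction

function overallScore(scores: CriterionScore[]): number {
  const totalWeight = scores.reduce((sum, s) => sum + s.weight, 0)
  const weighted = scores.reduce((sum, s) => sum + s.score * s.weight, 0)
  // Normalize in case weights don't sum exactly to 1, round to one decimal for display
  return Math.round((weighted / totalWeight) * 10) / 10
}

// One score combination that produces the 3.8 shown in the mockup
const example = overallScore([
  { score: 4, weight: 0.30 }, // Innovation & Impact
  { score: 4, weight: 0.25 }, // Feasibility
  { score: 4, weight: 0.25 }, // Team & Execution
  { score: 3, weight: 0.20 }, // Ocean Relevance
])
```

Rounding to one decimal matches the `3.8 / 5.0` display in the mockup.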
#### Binary Mode (optional for quick screening)

```
Should this project advance to the semi-finals?
[✅ Yes] [❌ No]

Justification (required): [________________]
```

#### Global Score Mode

```
Overall Score: [1] [2] [3] [4] [5] [6] [7] [8] [9] [10]

Feedback (required): [________________]
```
### 5.4 Document Visibility (Cross-Round)

Controlled by `RoundSubmissionVisibility`:

| Round | Sees Window 1 ("Application Docs") | Sees Window 2 ("Semi-finalist Docs") |
|-------|------------------------------------|--------------------------------------|
| Jury 1 (Round 3) | Yes | No (doesn't exist yet) |
| Jury 2 (Round 5) | Yes | Yes |
| Jury 3 (Round 7) | Yes | Yes |

In the evaluation UI:

- Documents are grouped by submission window
- Each group has a label (from `RoundSubmissionVisibility.displayLabel`)
- Clear visual separation (tabs, accordion sections, or side panels)
### 5.5 Auto-Save and Submission

- **Auto-save**: Client debounces and calls `evaluation.autosave` every 30 seconds while a draft is open
- **Draft status**: Evaluation starts as NOT_STARTED → DRAFT on first save → SUBMITTED on explicit submit
- **Submission validation**:
  - All required criteria scored (if criteria mode)
  - Global score provided (if global mode)
  - Binary decision selected (if binary mode)
  - Feedback text provided (if `requireFeedback`)
  - Window is open (or the juror has a grace period)
- **After submission**: Evaluation becomes read-only for the juror (status = SUBMITTED)
- **Admin can lock**: Set status to LOCKED to prevent any further changes
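
The validation checklist above can be sketched as a pure function. Shapes and names are illustrative; the real `evaluation.submit` procedure would run equivalent checks server-side (the window/grace check from §5.6 is omitted here):

```typescript
// Sketch of the submit-time checks listed above. Types are illustrative stand-ins.
type ScoringMode = "criteria" | "global" | "binary"

type SubmitInput = {
  criteriaScores?: Record<string, number | undefined>
  globalScore?: number
  binaryDecision?: "yes" | "no"
  feedback?: string
}

function validateSubmission(
  mode: ScoringMode,
  requireFeedback: boolean,
  requiredCriteria: string[],
  input: SubmitInput,
): string[] {
  const errors: string[] = []
  if (mode === "criteria") {
    for (const c of requiredCriteria) {
      if (input.criteriaScores?.[c] == null) errors.push(`Missing score for criterion "${c}"`)
    }
  }
  if (mode === "global" && input.globalScore == null) errors.push("Global score required")
  if (mode === "binary" && input.binaryDecision == null) errors.push("Binary decision required")
  if (requireFeedback && !input.feedback?.trim()) errors.push("Feedback text required")
  return errors
}
```

Returning a list of errors, rather than failing on the first one, lets the UI highlight every unmet requirement at once.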
### 5.6 Grace Periods

```
GracePeriod {
  roundId: "round-jury-1"
  userId: "judge-alice"
  projectId: null              // Applies to ALL Alice's assignments in this round
  extendedUntil: "2026-05-02"  // 2 days after official close
  reason: "Travel conflict"
  grantedById: "admin-1"
}
```

- Admin can grant per-juror or per-juror-per-project grace periods
- Evaluation submission checks grace periods before rejecting past-window submissions
- Dashboard shows "(Grace period: 2 extra days)" badge for affected jurors
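
A sketch of the past-window check with grace periods. The shapes and the function name are illustrative, but the matching rule follows the model above: a null `projectId` covers all of the juror's assignments in the round.

```typescript
// Sketch of the grace-period-aware submission window check. Names are illustrative.
type GracePeriod = {
  userId: string
  projectId: string | null // null = applies to all of the juror's assignments in the round
  extendedUntil: Date
}

function canSubmit(
  now: Date,
  windowCloseAt: Date,
  userId: string,
  projectId: string,
  gracePeriods: GracePeriod[],
): boolean {
  if (now <= windowCloseAt) return true
  // Past the window: allow only if a matching grace period is still open
  return gracePeriods.some(
    g =>
      g.userId === userId &&
      (g.projectId === null || g.projectId === projectId) &&
      now <= g.extendedUntil,
  )
}
```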
---

## 6. End of Evaluation — Results & Advancement

### 6.1 Results Visualization

When the evaluation window closes, the admin sees:

```
┌──────────────────────────────────────────────────────────────┐
│ Jury 1 Results                                               │
│ ──────────────────────────────────────────────────────────── │
│                                                              │
│ Completion: 142/150 evaluations submitted (94.7%)            │
│ Outstanding: 8 (3 jurors have pending evaluations)           │
│                                                              │
│ ┌─ STARTUPS (Top 10) ─────────────────────────────────────┐  │
│ │ #   Project        Avg Score  Consensus  Reviews  Status│  │
│ │ 1   OceanClean AI  4.6/5      0.92       3/3      ✅    │  │
│ │ 2   SeaWatch       4.3/5      0.85       3/3      ✅    │  │
│ │ 3   BlueCarbon     4.1/5      0.78       3/3      ✅    │  │
│ │ ...                                                     │  │
│ │ 10  TidalEnergy    3.2/5      0.65       3/3      ✅    │  │
│ │ ── cutoff line ──────────────────────────────────────── │  │
│ │ 11  WavePower      3.1/5      0.71       3/3      ⬜    │  │
│ │ 12  CoralGuard     2.9/5      0.55       2/3      ⚠️    │  │
│ └─────────────────────────────────────────────────────────┘  │
│                                                              │
│ ┌─ CONCEPTS (Top 10) ─────────────────────────────────────┐  │
│ │ (same layout)                                           │  │
│ └─────────────────────────────────────────────────────────┘  │
│                                                              │
│ [🤖 AI Recommendation]  [📊 Score Distribution]  [Export]    │
│                                                              │
│ [✅ Approve Shortlist]  [✏️ Edit Shortlist]                  │
└──────────────────────────────────────────────────────────────┘
```

**Metrics shown:**

- Average global score (or weighted criteria average)
- Consensus score (1 - normalized stddev, where 1.0 = full agreement)
- Review count / required
- Per-criterion averages (expandable)
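
The consensus metric can be computed as below. Normalizing the standard deviation by half the scale range is an assumption made for illustration; the document does not pin down the exact divisor.

```typescript
// Consensus = 1 - normalized stddev; 1.0 means all jurors agree exactly.
// Normalizing by half the scale range is an illustrative assumption.
function consensus(scores: number[], scaleMin: number, scaleMax: number): number {
  const n = scores.length
  if (n === 0) return 0
  const mean = scores.reduce((a, b) => a + b, 0) / n
  const variance = scores.reduce((a, b) => a + (b - mean) ** 2, 0) / n
  const stddev = Math.sqrt(variance)
  const maxSpread = (scaleMax - scaleMin) / 2 // worst-case stddev on this scale
  return Math.round((1 - stddev / maxSpread) * 100) / 100
}
```

With jurors scoring 4, 4, 4 the function returns 1.0 (full agreement); an extreme 1-vs-5 split returns 0.0.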
### 6.2 AI Recommendation

When admin clicks "AI Recommendation":

1. System calls `ai-evaluation-summary.ts` for each project in bulk
2. AI generates:
   - Ranked shortlist per category based on scores + feedback analysis
   - Strengths, weaknesses, themes per project
   - Recommendation: "Advance" / "Borderline" / "Do not advance"
3. Admin sees AI recommendation alongside actual scores
4. AI recommendations are suggestions only — admin has final say
### 6.3 Advancement Decision

```
Advancement Mode: admin_selection (with AI recommendation)

1. System shows ranked list per category
2. AI highlights recommended top N per category
3. Admin can:
   - Accept AI recommendation
   - Drag projects to reorder
   - Add/remove projects from advancement list
   - Set custom cutoff line
4. Admin clicks "Confirm Advancement"
5. System:
   a. Sets ProjectRoundState to PASSED for advancing projects
   b. Sets ProjectRoundState to REJECTED for non-advancing projects
   c. Updates Project.status to SEMIFINALIST (Jury 1) or FINALIST (Jury 2)
   d. Logs all decisions in DecisionAuditLog
   e. Sends notifications to all teams (advanced / not selected)
```
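
Step 5's state writes can be derived with a small pure function. The enum values follow the document; the function and shapes are illustrative stand-ins:

```typescript
// Sketch of the state changes in step 5 of the confirmation flow.
type RankedProject = { id: string; avgScore: number }

function computeAdvancement(
  ranked: RankedProject[], // sorted best-first within one category
  advanceCount: number,
  newStatus: "SEMIFINALIST" | "FINALIST", // Jury 1 → SEMIFINALIST, Jury 2 → FINALIST
) {
  return ranked.map((p, i) => ({
    projectId: p.id,
    roundState: i < advanceCount ? "PASSED" : "REJECTED",
    // Non-advancing projects keep their previous Project.status
    projectStatus: i < advanceCount ? newStatus : undefined,
  }))
}
```

Ranking and the cutoff come from the results overview; this function only derives the state writes, which keeps the DecisionAuditLog entries easy to generate in one pass.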
### 6.4 Advancement Modes

| Mode | Behavior |
|------|----------|
| `auto_top_n` | Top N per category automatically advance when window closes |
| `admin_selection` | Admin manually selects who advances (with AI/score guidance) |
| `ai_recommended` | AI proposes list, admin must approve/modify |

---
## 7. Special Awards Integration (Jury 2 Only)

During the Jury 2 evaluation round, special awards can run alongside the main evaluation:

### 7.1 How It Works

```
Round 5: "Jury 2 — Finalist Selection"
├── Main evaluation (all semi-finalists scored by Jury 2)
└── Special Awards (run in parallel):
    ├── "Innovation Award" — STAY_IN_MAIN mode
    │     Projects remain in main eval, flagged as eligible
    │     Award jury (subset of Jury 2 or separate) votes
    └── "Impact Award" — SEPARATE_POOL mode
          AI filters eligible projects into award pool
          Dedicated jury evaluates and votes
```
### 7.2 SpecialAward.evaluationRoundId

Each award links to the evaluation round it runs alongside:

```
SpecialAward {
  evaluationRoundId: "round-jury-2"     // Runs during Jury 2
  eligibilityMode: STAY_IN_MAIN
  juryGroupId: "jury-group-innovation"  // Can be same or different jury
}
```
### 7.3 Award Evaluation Flow

1. Before Jury 2 window opens: Admin runs award eligibility (AI or manual)
2. During Jury 2 window: Award jury members see their award assignments alongside regular evaluations
3. Award jury submits award votes (PICK_WINNER, RANKED, or SCORED)
4. After Jury 2 closes: Award results finalized alongside main results

---
## 8. Differences Between Jury 1 and Jury 2

| Aspect | Jury 1 (Round 3) | Jury 2 (Round 5) |
|--------|------------------|------------------|
| Input projects | All eligible (post-filtering) | Semi-finalists only |
| Visible docs | Window 1 only | Window 1 + Window 2 |
| Output | Semi-finalists | Finalists |
| Project.status update | → SEMIFINALIST | → FINALIST |
| Special awards | No | Yes (alongside) |
| Jury group | Jury 1 | Jury 2 (different members, possible overlap) |
| Typical project count | 50-100+ | 10-20 |
| Required reviews | 3 (more projects, less depth) | 3-5 (fewer projects, more depth) |

---
## 9. API Changes

### Preserved Procedures (renamed stageId → roundId)

| Procedure | Change |
|-----------|--------|
| `evaluation.get` | roundId via assignment |
| `evaluation.start` | No change |
| `evaluation.autosave` | No change |
| `evaluation.submit` | Window check uses round.windowCloseAt + grace periods |
| `evaluation.declareCOI` | No change |
| `evaluation.getCOIStatus` | No change |
| `evaluation.getProjectStats` | No change |
| `evaluation.listByRound` | Renamed from listByStage |
| `evaluation.generateSummary` | roundId instead of stageId |
| `evaluation.generateBulkSummaries` | roundId instead of stageId |
### New Procedures

| Procedure | Purpose |
|-----------|---------|
| `assignment.previewWithJuryGroup` | Preview assignments filtered by jury group with cap/quota logic |
| `assignment.getJuryGroupStats` | Per-member stats: load, category distribution, cap utilization |
| `evaluation.getResultsOverview` | Rankings, scores, consensus, AI recommendations per category |
| `evaluation.confirmAdvancement` | Admin confirms which projects advance |
| `evaluation.getAdvancementPreview` | Preview advancement impact before confirming |
### Modified Procedures

| Procedure | Modification |
|-----------|--------------|
| `assignment.getSuggestions` | Now filters by JuryGroup, applies hard/soft caps, category quotas |
| `assignment.create` | Now sets `juryGroupId` on Assignment |
| `assignment.bulkCreate` | Now validates against jury group caps |
| `file.listByProjectForRound` | Uses RoundSubmissionVisibility to filter docs |

---
## 10. Service Layer Changes

### `stage-assignment.ts` → `round-assignment.ts`

Key changes to `previewStageAssignment` → `previewRoundAssignment`:

1. **Load jury pool from JuryGroup** instead of all JURY_MEMBER users:

   ```typescript
   const juryGroup = await prisma.juryGroup.findUnique({
     where: { id: round.juryGroupId },
     include: { members: { include: { user: true } } }
   })
   const jurors = juryGroup.members.map(m => ({
     ...m.user,
     effectiveLimits: getEffectiveLimits(m, juryGroup),
   }))
   ```

2. **Replace simple max check** with cap mode logic (hard/soft/none)
3. **Add category quota tracking** per juror
4. **Add ratio preference scoring** in candidate ranking
5. **Report overflow** — projects that couldn't be assigned because all jurors hit caps
### `stage-engine.ts` → `round-engine.ts`

Simplified:

- Remove trackId from all transitions
- `executeTransition` now takes `fromRoundId` + `toRoundId` (or auto-advances to the next sortOrder)
- `validateTransition` simplified — no StageTransition lookup, just checks that the next round exists and is active
- Guard evaluation simplified — AdvancementRule.configJson replaces arbitrary guardJson

---
## 11. Edge Cases

### More projects than jurors can handle

- Algorithm assigns up to the hard/soft cap for all jurors
- Remaining projects are flagged as "unassigned" in the admin dashboard
- Admin must: add jurors, increase caps, or manually assign

### Juror doesn't complete by deadline

- Dashboard shows overdue assignments prominently
- Admin can: extend via GracePeriod, reassign to another juror, or mark as incomplete
### Tie in scores at cutoff

- Depending on `tieBreaker` config:
  - `admin_decides`: Admin manually picks from tied projects
  - `highest_individual`: Project with highest single-evaluator score wins
  - `revote`: Tied projects sent back for quick re-evaluation
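
The three `tieBreaker` modes can be dispatched as in this sketch (shapes and names are illustrative; only `highest_individual` resolves automatically):

```typescript
// Sketch of tie-breaker resolution at the cutoff line. Shapes are illustrative.
type TiedProject = { id: string; highestIndividualScore: number }
type TieBreaker = "admin_decides" | "highest_individual" | "revote"

type TieResult =
  | { kind: "winner"; projectId: string }
  | { kind: "needs_admin"; projectIds: string[] }
  | { kind: "needs_revote"; projectIds: string[] }

function resolveTie(tied: TiedProject[], mode: TieBreaker): TieResult {
  switch (mode) {
    case "highest_individual": {
      // Highest single-evaluator score wins
      const winner = [...tied].sort(
        (a, b) => b.highestIndividualScore - a.highestIndividualScore,
      )[0]
      return { kind: "winner", projectId: winner.id }
    }
    case "admin_decides":
      return { kind: "needs_admin", projectIds: tied.map(p => p.id) }
    case "revote":
      return { kind: "needs_revote", projectIds: tied.map(p => p.id) }
    default:
      // All TieBreaker values are handled above
      throw new Error("unknown tie breaker")
  }
}
```

The `admin_decides` and `revote` modes surface the tie back to the admin UI rather than deciding in code.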
### Category imbalance

- If one category has far more projects, quotas ensure jurors still get a mix
- If quotas can't be satisfied (not enough of one category), the system relaxes the quota for that category

### Juror in multiple jury groups

- Juror Alice is in Jury 1 and Jury 2
- Her assignments for each round are independent
- Her caps are per-jury-group (e.g., 20 for Jury 1, 15 for Jury 2)
- No cross-round cap — each round manages its own workload