MOPC AI System Architecture

Overview

The MOPC platform uses AI (OpenAI GPT models) for four core functions:

  1. Project Filtering - Automated eligibility screening against admin-defined criteria
  2. Jury Assignment - Smart juror-project matching based on expertise alignment
  3. Award Eligibility - Special award qualification determination
  4. Mentor Matching - Mentor-project recommendations based on expertise

System Architecture

┌─────────────────────────────────────────────────────────────────┐
│                      ADMIN INTERFACE                             │
│  (Rounds, Filtering, Awards, Assignments, Mentor Assignment)     │
└─────────────────────────┬───────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────────┐
│                      tRPC ROUTERS                                │
│  filtering.ts │ assignment.ts │ specialAward.ts │ mentor.ts      │
└─────────────────────────┬───────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────────┐
│                      AI SERVICES                                 │
│  ai-filtering.ts │ ai-assignment.ts │ ai-award-eligibility.ts   │
│                   │ mentor-matching.ts                           │
└─────────────────────────┬───────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────────┐
│                 ANONYMIZATION LAYER                              │
│               anonymization.ts                                   │
│  - PII stripping    - ID replacement    - Text sanitization     │
└─────────────────────────┬───────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────────┐
│                   OPENAI CLIENT                                  │
│               lib/openai.ts                                      │
│  - Model detection  - Parameter building  - Token tracking       │
└─────────────────────────┬───────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────────┐
│                   OPENAI API                                     │
│  GPT-4o │ GPT-4o-mini │ o1 │ o3-mini (configurable)             │
└─────────────────────────────────────────────────────────────────┘

Data Flow

  1. Admin triggers AI action (filter projects, suggest assignments)
  2. Router validates permissions and fetches data from database
  3. AI Service prepares data for processing
  4. Anonymization Layer strips PII, replaces IDs, sanitizes text
  5. OpenAI Client builds request with correct parameters for model type
  6. Request sent to OpenAI API
  7. Response parsed and de-anonymized
  8. Results stored in database, usage logged
  9. UI updated with results

Key Components

OpenAI Client (lib/openai.ts)

Handles communication with the OpenAI API:

  • getOpenAI() - Get configured OpenAI client
  • getConfiguredModel() - Get the admin-selected model
  • buildCompletionParams() - Build API parameters (handles reasoning vs standard models)
  • isReasoningModel() - Detect o1/o3/o4 series models
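The split between reasoning and standard models matters because reasoning models (o1/o3 series) reject `temperature` and take `max_completion_tokens` instead of `max_tokens`. A minimal sketch of the detection and parameter-building logic, assuming OpenAI's published model id strings; the exact signatures in lib/openai.ts may differ:

```typescript
// Detect o1/o3/o4-series reasoning models by their id prefix.
function isReasoningModel(model: string): boolean {
  return /^o[134](-|$)/.test(model);
}

interface CompletionParams {
  model: string;
  messages: { role: string; content: string }[];
  temperature?: number;
  max_tokens?: number;
  max_completion_tokens?: number;
}

function buildCompletionParams(
  model: string,
  messages: { role: string; content: string }[],
  maxTokens: number
): CompletionParams {
  if (isReasoningModel(model)) {
    // Reasoning models reject `temperature` and use `max_completion_tokens`.
    return { model, messages, max_completion_tokens: maxTokens };
  }
  return { model, messages, temperature: 0.2, max_tokens: maxTokens };
}
```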

Anonymization Service (server/services/anonymization.ts)

GDPR-compliant data preparation:

  • anonymizeForAI() - Basic anonymization for assignment
  • anonymizeProjectsForAI() - Comprehensive project anonymization for filtering/awards
  • validateAnonymization() - Verify no PII in anonymized data
  • deanonymizeResults() - Map AI results back to real IDs

Token Tracking (server/utils/ai-usage.ts)

Cost and usage monitoring:

  • logAIUsage() - Log API calls to database
  • calculateCost() - Compute estimated cost by model
  • getAIUsageStats() - Retrieve usage statistics
  • getCurrentMonthCost() - Get current billing period totals
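Cost estimation is a lookup of per-model token prices. The sketch below shows the shape of the calculation; the dollar figures are example values, not authoritative rates, and the real price table in server/utils/ai-usage.ts should track OpenAI's current pricing.

```typescript
// Example per-1M-token prices in USD (illustrative values only).
const PRICE_PER_1M: Record<string, { input: number; output: number }> = {
  "gpt-4o":      { input: 2.5,  output: 10 },
  "gpt-4o-mini": { input: 0.15, output: 0.6 },
};

function calculateCost(model: string, inputTokens: number, outputTokens: number): number {
  const price = PRICE_PER_1M[model];
  if (!price) return 0; // unknown model: still log tokens, skip the cost estimate
  return (inputTokens * price.input + outputTokens * price.output) / 1_000_000;
}
```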

Error Handling (server/services/ai-errors.ts)

Unified error classification:

  • classifyAIError() - Categorize API errors
  • shouldRetry() - Determine if error is retryable
  • getUserFriendlyMessage() - Get human-readable error messages
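The retry decision typically hinges on the HTTP status the OpenAI SDK attaches to its errors: rate limits (429) and server errors (5xx) are transient, while auth and validation failures are not. A sketch under that assumption; the real classifier may inspect more than the status code:

```typescript
type AIErrorKind = "rate_limit" | "server" | "auth" | "invalid_request" | "unknown";

function classifyAIError(status?: number): AIErrorKind {
  if (status === 429) return "rate_limit";
  if (status !== undefined && status >= 500) return "server";
  if (status === 401 || status === 403) return "auth";
  if (status === 400) return "invalid_request";
  return "unknown";
}

// Only transient failures are worth retrying.
function shouldRetry(kind: AIErrorKind): boolean {
  return kind === "rate_limit" || kind === "server";
}
```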

Batching Strategy

All AI services process data in batches to avoid token limits:

| Service           | Batch Size  | Reason                          |
|-------------------|-------------|---------------------------------|
| AI Assignment     | 15 projects | Include all jurors per batch    |
| AI Filtering      | 20 projects | Balance throughput and cost     |
| Award Eligibility | 20 projects | Consistent with filtering       |
| Mentor Matching   | 15 projects | All mentors per batch           |
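Applying these batch sizes comes down to a generic chunking helper, where each chunk becomes one OpenAI request. A minimal sketch:

```typescript
// Split an array into fixed-size chunks; the last chunk may be smaller.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```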

Fallback Behavior

All AI services have algorithmic fallbacks when AI is unavailable:

  1. Assignment - Expertise tag matching + load balancing
  2. Filtering - Flag all projects for manual review
  3. Award Eligibility - Flag all for manual review
  4. Mentor Matching - Keyword-based matching algorithm
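The keyword-based fallback for mentor matching can be sketched as scoring each mentor by how many of their expertise tags appear in the project description, then ranking by score. Field names here are illustrative, not the real schema:

```typescript
interface Mentor { id: string; expertise: string[] }

// Rank mentors by the number of expertise tags found in the description.
function fallbackMentorMatch(projectDescription: string, mentors: Mentor[]) {
  const text = projectDescription.toLowerCase();
  return mentors
    .map((m) => ({
      mentorId: m.id,
      score: m.expertise.filter((tag) => text.includes(tag.toLowerCase())).length,
    }))
    .sort((a, b) => b.score - a.score);
}
```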

Security Considerations

  1. API keys stored encrypted in database
  2. No PII sent to OpenAI (enforced by anonymization)
  3. Audit logging of all AI operations
  4. Role-based access to AI features (admin only)

Files Reference

| File                                      | Purpose                     |
|-------------------------------------------|-----------------------------|
| lib/openai.ts                             | OpenAI client configuration |
| server/services/ai-filtering.ts           | Project filtering service   |
| server/services/ai-assignment.ts          | Jury assignment service     |
| server/services/ai-award-eligibility.ts   | Award eligibility service   |
| server/services/mentor-matching.ts        | Mentor matching service     |
| server/services/anonymization.ts          | Data anonymization          |
| server/services/ai-errors.ts              | Error classification        |
| server/utils/ai-usage.ts                  | Token tracking              |

See Also