# AI Configuration Guide

## Admin Settings

Navigate to Settings → AI to configure AI features.

### Available Settings

| Setting | Description | Default |
|---|---|---|
| `ai_enabled` | Master switch for AI features | `true` |
| `ai_provider` | AI provider (currently OpenAI only) | `openai` |
| `ai_model` | Model to use | `gpt-4o` |
| `openai_api_key` | API key (stored encrypted) | - |
| `ai_send_descriptions` | Include project descriptions | `true` |

## Supported Models

### Standard Models (GPT)

| Model | Speed | Quality | Cost | Recommended For |
|---|---|---|---|---|
| `gpt-4o` | Fast | Best | Medium | Production use |
| `gpt-4o-mini` | Very Fast | Good | Low | High-volume, cost-sensitive |
| `gpt-4-turbo` | Medium | Very Good | High | Complex analysis |
| `gpt-3.5-turbo` | Very Fast | Basic | Very Low | Simple tasks only |

### Reasoning Models (o-series)

| Model | Speed | Quality | Cost | Recommended For |
|---|---|---|---|---|
| `o1` | Slow | Excellent | Very High | Complex reasoning tasks |
| `o1-mini` | Medium | Very Good | High | Moderate complexity |
| `o3-mini` | Medium | Good | Medium | Cost-effective reasoning |

**Note:** Reasoning models use different API parameters:

- `max_completion_tokens` instead of `max_tokens`
- No `temperature` parameter
- No `response_format: json_object`
- System messages are sent with the `developer` role instead

The platform automatically handles these differences via `buildCompletionParams()`.
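
For illustration, here is a minimal sketch of the kind of dispatch `buildCompletionParams()` performs. The function name comes from this doc; the option shape, the o-series detection, and the parameter mapping below are assumptions, not the platform's actual implementation:

```typescript
// Illustrative sketch only — option names and the reasoning-model detection
// are assumptions; the platform's real buildCompletionParams() may differ.
interface CompletionOptions {
  model: string;
  messages: { role: string; content: string }[];
  maxTokens: number;
  temperature?: number;
  jsonOutput?: boolean;
}

function buildCompletionParams(opts: CompletionOptions): Record<string, unknown> {
  // o-series reasoning models (o1, o1-mini, o3-mini, ...) take a different shape.
  const isReasoningModel = /^o\d/.test(opts.model);

  if (isReasoningModel) {
    return {
      model: opts.model,
      // Reasoning models expect max_completion_tokens, not max_tokens.
      max_completion_tokens: opts.maxTokens,
      // No temperature and no response_format for reasoning models;
      // system messages are sent with the "developer" role instead.
      messages: opts.messages.map((m) =>
        m.role === "system" ? { ...m, role: "developer" } : m
      ),
    };
  }

  return {
    model: opts.model,
    max_tokens: opts.maxTokens,
    temperature: opts.temperature ?? 0,
    ...(opts.jsonOutput ? { response_format: { type: "json_object" } } : {}),
    messages: opts.messages,
  };
}
```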

## Cost Estimates

### Per 1M Tokens (USD)

| Model | Input | Output |
|---|---|---|
| `gpt-4o` | $2.50 | $10.00 |
| `gpt-4o-mini` | $0.15 | $0.60 |
| `gpt-4-turbo` | $10.00 | $30.00 |
| `gpt-3.5-turbo` | $0.50 | $1.50 |
| `o1` | $15.00 | $60.00 |
| `o1-mini` | $3.00 | $12.00 |
| `o3-mini` | $1.10 | $4.40 |

### Typical Usage Per Operation

| Operation | Projects | Est. Tokens | Est. Cost (gpt-4o) |
|---|---|---|---|
| Filter 100 projects | 100 | ~10,000 | ~$0.10 |
| Assign 50 projects | 50 | ~15,000 | ~$0.15 |
| Award eligibility | 100 | ~10,000 | ~$0.10 |
| Mentor matching | 60 | ~12,000 | ~$0.12 |
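
The per-operation figures above match pricing every token at gpt-4o's output rate (e.g. 10,000 × $10.00 / 1M = $0.10), i.e. a conservative upper bound. A small sketch of that arithmetic; the rates are from the pricing table, while the function names and the input/output split are illustrative:

```typescript
// Cost arithmetic behind the estimates above. Rates are USD per 1M tokens,
// taken from the pricing table; function names are hypothetical.
const RATES: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10.0 },
  "gpt-4o-mini": { input: 0.15, output: 0.6 },
};

// Exact cost when the input/output token split is known.
function costUsd(model: string, inputTokens: number, outputTokens: number): number {
  const r = RATES[model];
  return (inputTokens * r.input + outputTokens * r.output) / 1_000_000;
}

// Conservative estimate: bill all tokens at the output rate.
// e.g. 10,000 tokens on gpt-4o -> 10000 * 10.00 / 1e6 = $0.10
function conservativeEstimateUsd(model: string, totalTokens: number): number {
  return (totalTokens * RATES[model].output) / 1_000_000;
}
```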

## Rate Limits

OpenAI enforces rate limits based on your account tier:

| Tier | Requests/Min | Tokens/Min |
|---|---|---|
| Tier 1 | 500 | 30,000 |
| Tier 2 | 5,000 | 450,000 |
| Tier 3+ | Higher | Higher |

The platform handles rate limits with:

- Batch processing (reduces request count)
- Error classification (detects rate-limit errors; see the sketch below)
- Manual retry guidance in the UI
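
As a sketch of the error-classification step, assuming the official `openai` Node SDK (v4), which raises an `APIError` carrying the HTTP status — the platform's actual handling may differ:

```typescript
import OpenAI from "openai";

// Hypothetical classifier: OpenAI signals rate limiting with HTTP 429.
function isRateLimitError(err: unknown): boolean {
  return err instanceof OpenAI.APIError && err.status === 429;
}
```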

## Environment Variables

```bash
# Required for AI features
OPENAI_API_KEY=sk-your-api-key

# Optional overrides (normally set via admin UI)
OPENAI_MODEL=gpt-4o
```

## Testing Connection

1. Go to Settings → AI
2. Enter your OpenAI API key
3. Click Save AI Settings
4. Click Test Connection

The test verifies:

- API key validity
- Model availability
- Basic request/response
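
Conceptually, the test amounts to one small authenticated request against the configured model. A hypothetical sketch using `fetch`; the platform's real test code is not shown here:

```typescript
// Hypothetical connection test: one tiny chat completion against the
// configured model. 401 → invalid key, 404 → model unavailable, 2xx → OK.
async function testConnection(apiKey: string, model: string): Promise<boolean> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: "ping" }],
    }),
  });
  return res.ok;
}
```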

## Monitoring Usage

### Admin Dashboard

Navigate to Settings → AI to see:

- Current month cost
- Token usage by feature
- Usage by model
- 30-day usage trend

### Database Queries

```sql
-- Current month usage
SELECT
  action,
  SUM(total_tokens) as tokens,
  SUM(estimated_cost_usd) as cost
FROM ai_usage_log
WHERE created_at >= date_trunc('month', NOW())
GROUP BY action;

-- Top users by cost
SELECT
  u.email,
  SUM(l.estimated_cost_usd) as total_cost
FROM ai_usage_log l
JOIN users u ON l.user_id = u.id
GROUP BY u.id
ORDER BY total_cost DESC
LIMIT 10;
```

## Troubleshooting

"Model not found"

- Verify the model is available with your API key tier
- Some models (`o1`, `o3`) require specific API access
- Try a more common model like `gpt-4o-mini`

"Rate limit exceeded"

- Wait a few minutes before retrying
- Consider using a smaller batch size
- Upgrade your OpenAI account tier

"All projects flagged"

1. Check Settings → AI for the correct API key
2. Verify the model is available
3. Check console logs for specific error messages
4. Test the connection with the button in settings

"Invalid API key"

1. Verify the key starts with `sk-`
2. Check that the key hasn't been revoked in the OpenAI dashboard
3. Ensure there is no extra whitespace in the key (a quick sanity check is sketched below)
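
A client-side sanity check covering steps 1 and 3 — a hypothetical helper, not part of the platform; a revoked key (step 2) can only be caught by a live request:

```typescript
// Hypothetical helper: trims stray whitespace and checks the "sk-" prefix.
function looksLikeOpenAiKey(raw: string): boolean {
  const key = raw.trim();
  return key.startsWith("sk-");
}
```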

## Best Practices

1. Use `gpt-4o-mini` for high-volume operations (filtering many projects)
2. Use `gpt-4o` for critical decisions (final assignments)
3. Monitor costs regularly via the usage dashboard
4. Test with small batches before running on the full dataset
5. Keep descriptions enabled for better matching accuracy

## See Also