Expand GDPR documentation with comprehensive compliance details

- Add complete definitions section (GDPR terms, AI-specific terms)
- Document Monaco Law 1.565 (Dec 2024) and new APDP authority
- List all joint controllers (IUM, Oceanographic Institute, etc.)
- Detail all personal data categories processed
- Document legal bases with Legitimate Interests Assessments
- Add complete data subject rights procedures
- Document server location (Austria, EU) and EU data residency for OpenAI
- Add security measures, encryption standards, backup procedures
- Include Data Protection Impact Assessments
- Add breach notification procedures with timelines
- Document OpenAI as subprocessor with DPA and ZDR details
- Add compliance checklists and audit procedures

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Matt 2026-02-03 12:22:15 +01:00
parent 928b1c65dc
commit fd82a9b981
2 changed files with 1714 additions and 389 deletions


@@ -1,217 +1,766 @@
# AI Data Processing - GDPR Compliance Documentation
**Document Version:** 2.0
**Last Updated:** February 2026
**Classification:** Internal / Compliance
**Parent Document:** [Platform GDPR Compliance](./platform-gdpr-compliance.md)
---
## Table of Contents
1. [Executive Summary](#1-executive-summary)
2. [Definitions](#2-definitions)
3. [Legal Framework](#3-legal-framework)
4. [AI Processing Activities](#4-ai-processing-activities)
5. [Data Minimisation & Anonymisation](#5-data-minimisation--anonymisation)
6. [Technical Implementation](#6-technical-implementation)
7. [Subprocessor: OpenAI](#7-subprocessor-openai)
8. [Data Subject Rights](#8-data-subject-rights)
9. [Risk Assessment](#9-risk-assessment)
10. [Audit & Monitoring](#10-audit--monitoring)
11. [Incident Response](#11-incident-response)
12. [Compliance Checklist](#12-compliance-checklist)
13. [Contact Information](#13-contact-information)
---
## 1. Executive Summary
This document describes how the Monaco Ocean Protection Challenge (MOPC) Platform uses Artificial Intelligence (AI) services while maintaining strict compliance with the General Data Protection Regulation (GDPR) and Monaco Law 1.565 of December 3, 2024.
### Key Compliance Measures
| Measure | Implementation |
|---------|----------------|
| **Data Minimisation** | Only necessary, non-identifying data sent to AI |
| **Anonymisation** | All personal identifiers stripped before AI processing |
| **EU Data Residency** | AI processing occurs within EU (Ireland) |
| **Zero Data Retention** | AI provider does not store data at rest |
| **Human Oversight** | AI provides recommendations only; humans make final decisions |
| **Audit Trail** | All AI operations logged for accountability |
### Fundamental Principle
**No personal data is transmitted to AI services.** All data sent to OpenAI is fully anonymised, meaning it cannot be attributed to any identifiable natural person. Anonymised data is not considered personal data under GDPR.
---
## 2. Definitions
In addition to the definitions in the [Platform GDPR Compliance](./platform-gdpr-compliance.md) document, the following AI-specific definitions apply:
| Term | Definition |
|------|------------|
| **Artificial Intelligence (AI)** | Computer systems capable of performing tasks that typically require human intelligence, such as pattern recognition, natural language understanding, and decision-making support. |
| **Large Language Model (LLM)** | A type of AI model trained on large amounts of text data to understand and generate human language. OpenAI's GPT models are examples of LLMs. |
| **AI Service** | A component of the Platform that uses AI to process data and provide recommendations or analysis. |
| **Anonymised Data** | Data that has been processed in such a way that the data subject is not or no longer identifiable. Under GDPR, anonymised data is not personal data. |
| **Pseudonymised Data** | Data processed so that it can no longer be attributed to a specific data subject without additional information kept separately. Unlike anonymised data, pseudonymised data is still personal data under GDPR. |
| **Token** | A unit of text processed by an LLM. Approximately 1 token = 4 characters in English. Token usage determines AI processing costs. |
| **Zero Data Retention (ZDR)** | A configuration where the AI provider does not store input or output data at rest after processing is complete. |
| **EU Data Residency** | A configuration ensuring that data is processed within the European Union and does not leave EU jurisdiction. |
| **Prompt** | The text input sent to an AI model, consisting of instructions and data to be processed. |
| **Completion** | The text output generated by an AI model in response to a prompt. |
---
## 3. Legal Framework
### 3.1 Legal Basis for AI Processing
AI-assisted processing activities are conducted under the following legal bases:
| Activity | Legal Basis | GDPR Article | Justification |
|----------|-------------|--------------|---------------|
| AI Project Filtering | Legitimate Interests | Art. 6(1)(f) | Efficient evaluation of large application volumes |
| AI Jury Assignment | Legitimate Interests | Art. 6(1)(f) | Optimal matching of expertise to projects |
| AI Award Eligibility | Legitimate Interests | Art. 6(1)(f) | Consistent application of eligibility criteria |
| AI Mentor Matching | Legitimate Interests | Art. 6(1)(f) | Effective mentor-project pairing |
### 3.2 Legitimate Interests Assessment
A Legitimate Interests Assessment (LIA) has been conducted for AI processing:
#### Purpose
To efficiently process and evaluate competition applications using AI-assisted analysis and matching.
#### Legitimate Interest Identified
- **Organisational efficiency:** Processing 100+ projects manually is impractical
- **Consistency:** AI applies criteria uniformly across all applications
- **Expertise matching:** AI identifies optimal reviewer-project and mentor-project pairings
- **Cost-effectiveness:** Reduced administrative burden enables focus on substantive evaluation
#### Necessity
- AI processing is necessary to achieve these interests at scale
- No less intrusive means would achieve the same objectives efficiently
- Human review alone cannot process the volume within required timeframes
#### Balancing Test
- **Risk to data subjects:** Minimal to none - data is fully anonymised before AI processing
- **Reasonable expectations:** Participants expect efficient, fair evaluation processes
- **Relationship:** Direct relationship through competition participation
- **Safeguards in place:**
- Full anonymisation (not pseudonymisation)
- EU data residency
- Zero data retention at AI provider
- Human oversight of all AI recommendations
- Right to object and request manual processing
#### Conclusion
The legitimate interests of the organisation are not overridden by the interests, rights, or freedoms of the data subjects. Processing may proceed with the implemented safeguards.
### 3.3 Article 22 - Automated Decision-Making
**Statement:** The Platform's AI processing does **not** constitute automated decision-making as defined in Article 22 of the GDPR.
Article 22(1) states: *"The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."*
**Why Article 22 does not apply:**
1. **Not solely automated:** All AI outputs are recommendations reviewed and approved by human administrators. No decision is made without human involvement.
2. **No legal effects:** AI recommendations do not directly produce legal effects on data subjects. Humans make the final decisions about project advancement, jury assignments, and award eligibility.
3. **No significant effects:** The interim recommendations produced by AI do not, by themselves, significantly affect data subjects. Only the final human decisions have such effects.
4. **Anonymised data:** The data processed by AI is anonymised, meaning Article 22 protections for personal data processing do not apply to the AI processing stage itself.
**Safeguards implemented regardless:**
- Human review of all AI recommendations before implementation
- Right to request explanation of AI-assisted decisions
- Right to request fully manual processing
- Audit logging of AI recommendations and human decisions
---
## 4. AI Processing Activities
### 4.1 Overview of AI Services
The Platform uses AI for four distinct processing activities:
| Service | Purpose | Input Data | Output |
|---------|---------|------------|--------|
| **Project Filtering** | Evaluate projects against admin-defined criteria | Anonymised project data | Pass/fail recommendations with confidence scores |
| **Jury Assignment** | Match jury expertise to project topics | Anonymised juror and project data | Assignment suggestions with match scores |
| **Award Eligibility** | Determine eligibility for special awards | Anonymised project data | Eligibility determinations with reasoning |
| **Mentor Matching** | Recommend mentors for projects | Anonymised mentor and project data | Ranked mentor recommendations |
### 4.2 AI Project Filtering
**Purpose:** Assist administrators in screening projects against specific criteria (e.g., "Projects must have ocean conservation focus", "Exclude projects without descriptions").
**Process:**
1. Administrator defines criteria in plain language
2. System anonymises project data (see Section 5)
3. Anonymised data sent to AI with criteria
4. AI returns recommendations with confidence scores
5. Administrator reviews and approves/modifies recommendations
6. Results applied to projects
**Human Oversight:** Administrator reviews all AI recommendations before application. Projects flagged by AI as "uncertain" require manual review.
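To make the human-oversight step concrete, the sketch below shows one possible shape for a filtering recommendation and its approval gate. The type and function names (`FilterRecommendation`, `applyRecommendation`) are illustrative assumptions, not the Platform's actual code.

```typescript
// Hypothetical shape of an AI filtering recommendation (illustrative only).
interface FilterRecommendation {
  projectId: string;                                // anonymised ID, e.g. "P1", mapped back internally
  recommendation: 'pass' | 'fail' | 'uncertain';    // AI suggestion against the admin-defined criteria
  confidence: number;                               // 0–1 confidence score returned by the AI
  reasoning: string;                                // short explanation shown to the administrator
}

// A recommendation is only applied after explicit administrator approval;
// items flagged "uncertain" are routed to manual review regardless.
function applyRecommendation(
  rec: FilterRecommendation,
  approvedByAdmin: boolean,
): 'applied' | 'manual-review' {
  if (rec.recommendation === 'uncertain' || !approvedByAdmin) {
    return 'manual-review';
  }
  return 'applied';
}
```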
### 4.3 AI Jury Assignment
**Purpose:** Suggest optimal juror-project pairings based on expertise alignment.
**Process:**
1. System anonymises juror expertise tags and project data
2. Anonymised data sent to AI with assignment constraints
3. AI returns suggested pairings with match scores and reasoning
4. Administrator reviews suggestions
5. Administrator approves, modifies, or rejects assignments
6. Approved assignments created in system
**Human Oversight:** All assignments require explicit administrator approval. AI suggestions can be partially accepted or entirely rejected.
### 4.4 AI Award Eligibility
**Purpose:** Assist in determining which projects meet special award criteria.
**Process:**
1. Award criteria defined (may include rule-based and AI-interpreted criteria)
2. System anonymises project data
3. Anonymised data sent to AI with criteria
4. AI returns eligibility determinations with reasoning
5. Administrator reviews determinations
6. Final eligibility set by administrator
**Human Oversight:** Administrator has final authority on all eligibility decisions. AI reasoning is transparent and reviewable.
### 4.5 AI Mentor Matching
**Purpose:** Recommend suitable mentors for selected projects based on expertise.
**Process:**
1. System anonymises mentor profiles and project data
2. Anonymised data sent to AI
3. AI returns ranked mentor recommendations with reasoning
4. Administrator reviews recommendations
5. Assignments made by administrator or offered to mentors
**Human Oversight:** Mentor assignments require administrator approval and mentor acceptance.
---
## 5. Data Minimisation & Anonymisation
### 5.1 Principles Applied
The Platform applies the following GDPR principles to AI processing:
| Principle | GDPR Article | Implementation |
|-----------|--------------|----------------|
| **Data Minimisation** | Art. 5(1)(c) | Only necessary fields sent; descriptions truncated |
| **Purpose Limitation** | Art. 5(1)(b) | Data used only for specific AI task |
| **Storage Limitation** | Art. 5(1)(e) | Zero data retention at AI provider |
| **Integrity & Confidentiality** | Art. 5(1)(f) | TLS encryption; anonymisation |
### 5.2 What is Sent to AI
The following anonymised data elements may be sent to AI services:
| Data Element | Anonymisation Method | Purpose |
|--------------|---------------------|---------|
| Project ID | Replaced with sequential ID (P1, P2, etc.) | Reference only |
| Project title | PII patterns removed | Content analysis |
| Project description | Truncated (300-500 chars), PII removed | Criteria matching |
| Competition category | Sent as-is (enum value) | Filtering criteria |
| Ocean issue | Sent as-is (enum value) | Topic matching |
| Country | Sent as-is (country name) | Geographic filtering |
| Region/Zone | Sent as-is (zone name) | Regional eligibility |
| Institution | Sent as-is (institution name) | Student project identification |
| Tags | Sent as-is (keywords) | Topic matching |
| Founded year | Year only (not full date) | Age-based filtering |
| Team size | Count only (no member details) | Team requirements |
| File count | Count only (no file content) | Document requirements |
| File types | Type names only | File requirement checks |
| Mentorship preference | Boolean flag | Mentorship filtering |
| Submission source | Enum value | Source filtering |
| Submission date | Date only (no time) | Deadline checks |
| Juror expertise tags | Sent as-is (keywords) | Expertise matching |
| Juror assignment count | Number only | Workload balancing |
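As an illustration only, the anonymised project payload described in this table can be sketched as the following TypeScript shape (field names follow Appendix A; the interface name is hypothetical):

```typescript
// Sketch of the anonymised project record sent to AI services.
interface AnonymisedProject {
  project_id: string;          // sequential anonymous ID, e.g. "P1"
  title: string;               // PII patterns removed
  description: string;         // truncated to 300–500 characters, PII removed
  category: string;            // enum value, e.g. "STARTUP"
  ocean_issue: string;         // enum value, e.g. "HABITAT_RESTORATION"
  country: string;             // country name only
  region: string;              // zone name only
  institution: string | null;  // institution name only
  tags: string[];              // keywords
  founded_year: number;        // year only, not full date
  team_size: number;           // count only, no member details
  file_count: number;          // count only, no file content
  file_types: string[];        // type names only
  wants_mentorship: boolean;
  submission_source: string;   // enum value
  submitted_date: string;      // ISO date, no time
}
```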
### 5.3 What is NEVER Sent to AI
The following data elements are **never** transmitted to AI services:
| Data Element | Reason | Alternative |
|--------------|--------|-------------|
| **Personal names** | PII - directly identifying | N/A |
| **Email addresses** | PII - directly identifying | N/A |
| **Phone numbers** | PII - directly identifying | N/A |
| **Physical addresses** | PII - directly identifying | Country/region only |
| **Team member details** | PII - identifying individuals | Team size count only |
| **External URLs** | Could lead to identifying information | Removed |
| **Real database IDs** | Could be cross-referenced | Sequential anonymous IDs |
| **File contents** | May contain PII | File type and count only |
| **Internal comments** | May contain PII references | N/A |
| **Profile photos** | Biometric data | N/A |
| **IP addresses** | PII - indirectly identifying | N/A |
### 5.4 PII Detection and Removal
Before any data is sent to AI, the following patterns are detected and removed:
```
Email addresses: [a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}
Phone numbers: Various international formats
URLs: https?://[^\s]+
Social Security: \d{3}-\d{2}-\d{4}
IP addresses: \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}
```
Detected patterns are replaced with placeholders (e.g., `[email removed]`, `[url removed]`).
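A minimal sketch of this detection-and-replacement step is shown below, reusing the patterns listed above; the helper name `stripPII` and the placeholder wording are illustrative.

```typescript
// Patterns detected and replaced with placeholders before any AI transmission.
const PII_PATTERNS: Record<string, { pattern: RegExp; placeholder: string }> = {
  email: { pattern: /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g, placeholder: '[email removed]' },
  url:   { pattern: /https?:\/\/[^\s]+/g, placeholder: '[url removed]' },
  ssn:   { pattern: /\d{3}-\d{2}-\d{4}/g, placeholder: '[id removed]' },
  ipv4:  { pattern: /\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b/g, placeholder: '[ip removed]' },
  phone: { pattern: /(\+?\d{1,3}[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}/g, placeholder: '[phone removed]' },
};

// Replace every detected PII pattern in a free-text field with its placeholder.
function stripPII(text: string): string {
  let cleaned = text;
  for (const { pattern, placeholder } of Object.values(PII_PATTERNS)) {
    cleaned = cleaned.replace(pattern, placeholder);
  }
  return cleaned;
}
```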
### 5.5 Anonymisation vs. Pseudonymisation
**Critical distinction:**
| Aspect | Pseudonymisation | Anonymisation (Our Approach) |
|--------|------------------|------------------------------|
| Definition | Data can be attributed to individual with additional info | Data cannot be attributed to any individual |
| GDPR Status | Still personal data | Not personal data |
| Example | User123 → Real user (with mapping) | P1, P2 → No mapping to individuals |
| Our implementation | ❌ Not used | ✅ Used |
The Platform uses **anonymisation**, not pseudonymisation. The sequential IDs (P1, P2) used in AI processing cannot be mapped back to individuals by the AI provider or any external party. The mapping exists only within the Platform's secure environment and is used solely to apply AI recommendations to the correct records.
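As a simplified illustration, such a sequential-ID mapping might be built and kept entirely inside the Platform as follows (function and variable names are hypothetical):

```typescript
// Build an in-memory mapping between real database IDs and sequential anonymous IDs.
// Only the anonymous IDs (P1, P2, ...) ever leave the Platform; the mapping stays internal
// and is used solely to apply AI recommendations back to the correct records.
function buildAnonymousIdMap(
  realIds: string[],
  prefix = 'P',
): { toAnon: Map<string, string>; toReal: Map<string, string> } {
  const toAnon = new Map<string, string>();
  const toReal = new Map<string, string>();
  realIds.forEach((realId, index) => {
    const anonId = `${prefix}${index + 1}`;
    toAnon.set(realId, anonId);
    toReal.set(anonId, realId);
  });
  return { toAnon, toReal };
}
```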
### 5.6 Validation Before Transmission
Every data payload is validated before transmission to AI:
```typescript
// Executed before EVERY AI API call
function enforceGDPRCompliance(data: unknown[]): void {
  for (const item of data) {
    const { valid, violations } = validateNoPersonalData(item);
    if (!valid) {
      throw new Error(`GDPR compliance check failed: ${violations.join(', ')}`);
    }
  }
}
```
If validation fails, the AI operation is aborted and an error is logged. This provides defence-in-depth against accidental PII transmission.
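For illustration, a simplified version of such a validator might look as follows; the actual `validateNoPersonalData()` implementation may differ, and the forbidden-field list shown here is an assumption:

```typescript
// Simplified sketch of the pre-transmission validator (illustrative, not the actual implementation).
const FORBIDDEN_FIELDS = ['name', 'email', 'phone', 'address', 'userId'];

function validateNoPersonalData(item: unknown): { valid: boolean; violations: string[] } {
  const violations: string[] = [];
  const serialised = JSON.stringify(item);

  // 1. No directly identifying field names may appear in the payload.
  if (item && typeof item === 'object') {
    for (const field of FORBIDDEN_FIELDS) {
      if (field in item) {
        violations.push(`forbidden field: ${field}`);
      }
    }
  }
  // 2. No PII patterns (emails, URLs, ...) may remain anywhere in the serialised payload.
  if (/[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/.test(serialised)) {
    violations.push('email address detected');
  }
  if (/https?:\/\/[^\s"]+/.test(serialised)) {
    violations.push('URL detected');
  }
  return { valid: violations.length === 0, violations };
}
```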
---
## 6. Technical Implementation
### 6.1 Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Platform (Austria, EU) │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ AI Service │───▶│ Anonymiser │───▶│ Validator │ │
│ │ (filtering, │ │ (strip PII, │ │ (verify no │ │
│ │ assignment) │ │ replace IDs)│ │ PII remains)│ │
│ └──────────────┘ └──────────────┘ └──────┬───────┘ │
│ │ │
│ ▼ │
│ ┌──────────────┐ │
│ │ API Client │ │
│ │ (TLS 1.2+) │ │
│ └──────┬───────┘ │
└─────────────────────────────────────────────────┼───────────────┘
│ HTTPS (TLS 1.2+)
│ Anonymised data only
┌─────────────────────────────────────────────────────────────────┐
│ OpenAI (Dublin, Ireland, EU) │
│ │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ GPT Model Processing │ │
│ │ │ │
│ │ • EU Data Residency enabled │ │
│ │ • Zero Data Retention (ZDR) │ │
│ │ • Data NOT used for training │ │
│ │ • AES-256 encryption at rest (during processing) │ │
│ │ • SOC 2 Type 2 compliant │ │
│ └──────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
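For illustration, the API-client step in the diagram above amounts to an HTTPS call to OpenAI's Chat Completions endpoint carrying only the anonymised, validated payload. The sketch below is an assumption-level example (model name and prompt wording are illustrative), not the Platform's actual client code.

```typescript
// Minimal sketch: send anonymised data over HTTPS (TLS 1.2+) to the Chat Completions API.
async function callOpenAI(anonymisedPayload: object, criteria: string): Promise<string> {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-4o', // example model
      messages: [
        { role: 'system', content: 'You evaluate anonymised project data against the given criteria.' },
        { role: 'user', content: `Criteria: ${criteria}\nData: ${JSON.stringify(anonymisedPayload)}` },
      ],
    }),
  });
  if (!response.ok) {
    throw new Error(`OpenAI API error: ${response.status}`);
  }
  const result = await response.json();
  return result.choices[0].message.content; // parsed, then discarded; never stored verbatim
}
```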
### 6.2 Encryption
| Stage | Encryption | Standard |
|-------|------------|----------|
| Data at rest (Platform) | AES-256 | Database encryption |
| Data in transit | TLS 1.2+ | HTTPS to OpenAI API |
| Data at rest (OpenAI) | AES-256 | OpenAI infrastructure |
| Data at rest after processing | N/A | Zero Data Retention |
### 6.3 Batching Strategy
To optimise efficiency and reduce API calls, data is processed in batches:
| Service | Batch Size | Rationale |
|---------|------------|-----------|
| Project Filtering | 20 projects | Balance throughput and cost |
| Jury Assignment | 15 projects | Include all jurors per batch |
| Award Eligibility | 20 projects | Consistent with filtering |
| Mentor Matching | 15 projects | Include all mentors per batch |
Batching reduces the number of API calls and associated costs while maintaining processing efficiency.
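A minimal sketch of this batching step, using the batch sizes from the table above (constant and helper names are illustrative):

```typescript
// Batch sizes per AI service, as documented above.
const BATCH_SIZES = {
  filtering: 20,
  assignment: 15,
  eligibility: 20,
  mentorMatching: 15,
} as const;

// Split anonymised items into batches before sending them to the AI,
// reducing the number of API calls per operation.
function toBatches<T>(items: T[], batchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// Example: 87 projects with a filtering batch size of 20 → 5 API calls.
```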
### 6.4 Description Truncation
Project descriptions are truncated to limit data exposure and token consumption:
| Context | Limit | Rationale |
|---------|-------|-----------|
| Assignment | 300 characters | Sufficient for topic identification |
| Filtering | 500 characters | More context needed for criteria |
| Eligibility | 400 characters | Balanced approach |
| Mentor Matching | 350 characters | Focus on topic alignment |
Truncation reduces:
- Data exposure (less text transmitted)
- Processing costs (fewer tokens)
- Risk of PII in longer texts
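A sketch of the truncation step, assuming the character limits from the table above (constant and function names are illustrative):

```typescript
// Character limits per processing context, as documented above.
const DESCRIPTION_LIMITS = {
  assignment: 300,
  filtering: 500,
  eligibility: 400,
  mentorMatching: 350,
} as const;

// Truncate a description to the limit for the given context before anonymisation and validation.
function truncateDescription(description: string, context: keyof typeof DESCRIPTION_LIMITS): string {
  const limit = DESCRIPTION_LIMITS[context];
  return description.length <= limit ? description : description.slice(0, limit);
}
```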
---
## 7. Subprocessor: OpenAI
### 7.1 Subprocessor Details
| Field | Value |
|-------|-------|
| **Legal Entity** | OpenAI, Inc. |
| **EU Entity** | OpenAI Ireland Limited |
| **Registered Address** | 3180 18th Street, San Francisco, CA 94110, USA |
| **EU Processing Location** | Dublin, Ireland |
| **Role** | Data Processor (for anonymised data) |
| **Service Used** | OpenAI API (Chat Completions) |
### 7.2 Data Processing Agreement
| Aspect | Status |
|--------|--------|
| **DPA Available** | Yes - OpenAI Data Processing Addendum |
| **SCCs Included** | Yes - EU Standard Contractual Clauses |
| **Current Status** | Using standard API Terms; DPA execution recommended |
**Recommendation:** Execute the formal OpenAI Data Processing Addendum for enhanced contractual protection, even though only anonymised data is transmitted.
**Reference:** [OpenAI Data Processing Addendum](https://openai.com/policies/data-processing-addendum/)
### 7.3 EU Data Residency
OpenAI offers EU data residency for API customers:
| Feature | Details |
|---------|---------|
| **Processing Location** | Dublin, Ireland (EU) |
| **Configuration** | Per-project setting in OpenAI platform |
| **Data Flows** | Requests processed entirely within EU |
| **Availability** | Available for API Platform customers |
**Status:** EU data residency should be configured for all MOPC API projects.
**Reference:** [OpenAI EU Data Residency](https://openai.com/index/introducing-data-residency-in-europe/)
### 7.4 Zero Data Retention
| Aspect | Details |
|--------|---------|
| **Default Retention** | 30 days (for abuse monitoring) |
| **ZDR Option** | Available for eligible endpoints |
| **With EU Residency** | Automatic ZDR for EU projects |
| **Training Data** | API data NOT used for training (default) |
With EU data residency enabled, data is processed in-region and not stored at rest on OpenAI's servers.
### 7.5 Security Certifications
OpenAI maintains the following security certifications:
| Certification | Scope |
|---------------|-------|
| **SOC 2 Type 2** | Security, availability, confidentiality |
| **ISO/IEC 27001** | Information security management |
| **ISO/IEC 27017** | Cloud security controls |
| **ISO/IEC 27018** | PII protection in cloud |
| **ISO/IEC 27701** | Privacy information management |
### 7.6 Subprocessor Due Diligence
| Assessment Area | Finding |
|-----------------|---------|
| **Security posture** | Strong - multiple certifications |
| **Privacy practices** | GDPR-aligned - DPA available |
| **Data handling** | Configurable - EU residency, ZDR available |
| **Training data** | Acceptable - API data not used by default |
| **Incident response** | Documented - breach notification procedures in DPA |
**Conclusion:** OpenAI is an acceptable subprocessor for anonymised data processing with appropriate configurations enabled.
---
## 8. Data Subject Rights
### 8.1 Rights Applicable to AI Processing
| Right | Applicability | Explanation |
|-------|---------------|-------------|
| **Access** | Limited | Anonymised data sent to AI is not personal data |
| **Rectification** | N/A | Anonymised data cannot be corrected as it's not attributed |
| **Erasure** | N/A | No personal data stored at AI provider |
| **Restriction** | Via objection | Can request exclusion from AI processing |
| **Portability** | N/A | Anonymised AI inputs are not personal data |
| **Object** | Yes | Can object to AI-assisted processing |
| **Automated decisions** | N/A | No solely automated decisions made |
### 8.2 Right to Object to AI Processing
Data subjects may object to having their project data processed by AI systems.
**Procedure:**
1. Submit objection to gdpr@monaco-opc.com
2. Objection acknowledged within 72 hours
3. Project excluded from AI processing
4. Manual review conducted instead
**Impact of objection:**
- Project will not be processed by AI filtering
- Jury assignment suggestions generated manually or algorithmically
- Award eligibility determined manually
- Mentor matching done manually
- No disadvantage to the data subject
### 8.3 Right to Explanation
Data subjects may request an explanation of how AI recommendations affected decisions about their project.
**Information provided:**
- Whether AI was used in processing their application
- What criteria the AI evaluated against
- The AI's recommendation (if approved by human reviewer)
- The human decision that was ultimately made
- The reasoning provided by the human decision-maker
**Note:** The actual AI model's internal reasoning is not interpretable. Explanations are based on the prompts used, the recommendations output, and the human reviewer's documented rationale.
### 8.4 Right to Human Review
All AI recommendations are subject to human review before implementation. Data subjects may also request:
- Confirmation that a human reviewed the AI recommendation
- The identity of the human reviewer (role, not personal identity)
- The outcome of the human review
---
## 9. Risk Assessment
### 9.1 Data Protection Impact Assessment Summary
A DPIA has been conducted for AI processing activities. Key findings:
| Risk Category | Risk | Likelihood | Severity | Mitigation | Residual Risk |
|---------------|------|------------|----------|------------|---------------|
| **PII Exposure** | Personal data sent to AI provider | Very Low | Medium | Anonymisation, validation | Very Low |
| **Re-identification** | AI provider re-identifies individuals | Very Low | Medium | Full anonymisation (not pseudonymisation) | Very Low |
| **Data Breach at AI Provider** | Breach exposes data | Low | Low | Anonymised data only; ZDR | Very Low |
| **Algorithmic Bias** | AI recommendations are biased | Medium | Medium | Human oversight, diverse training data | Low |
| **Incorrect Recommendations** | AI makes errors | Medium | Low | Human review before action | Low |
| **Model Training on Data** | Data used to train AI | Very Low | Medium | Contractual prohibition; opt-out default | Very Low |
### 9.2 Risk Mitigation Summary
| Risk | Primary Mitigation | Secondary Mitigation |
|------|-------------------|---------------------|
| PII Exposure | Automated anonymisation | Pre-transmission validation |
| Re-identification | Anonymisation (not pseudonymisation) | No additional data sent |
| Data Breach | Zero data retention | Anonymised data only |
| Algorithmic Bias | Human oversight | Documented criteria |
| Incorrect Recommendations | Mandatory human review | Algorithmic fallback |
| Model Training | Contractual terms | Technical opt-out |
### 9.3 Residual Risk Statement
After implementation of all mitigation measures, the residual risk of GDPR non-compliance in AI processing is assessed as **Very Low**. The primary reason is that no personal data is transmitted to the AI provider - only fully anonymised data that cannot be attributed to any identifiable natural person.
---
## 10. Audit & Monitoring
### 10.1 Audit Logging
All AI operations are logged in the `AIUsageLog` table:
| Field | Purpose |
|-------|---------|
| `createdAt` | Timestamp of AI operation |
| `userId` | Administrator who initiated operation |
| `action` | Type of AI operation (FILTERING, ASSIGNMENT, etc.) |
| `entityType` | Related entity type (Round, Award, etc.) |
| `entityId` | Related entity ID |
| `model` | AI model used |
| `promptTokens` | Input tokens consumed |
| `completionTokens` | Output tokens consumed |
| `totalTokens` | Total tokens consumed |
| `estimatedCostUsd` | Estimated cost in USD |
| `batchSize` | Number of items in batch |
| `itemsProcessed` | Number of items successfully processed |
| `status` | SUCCESS, PARTIAL, or ERROR |
| `errorMessage` | Error details if applicable |
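For illustration, an `AIUsageLog` entry is written after each AI operation roughly as follows (a Prisma sketch consistent with the fields above; the concrete values shown are examples):

```typescript
// Sketch of writing an AIUsageLog entry after an AI operation.
await prisma.aIUsageLog.create({
  data: {
    userId: ctx.user.id,      // administrator who initiated the operation
    action: 'FILTERING',      // type of AI operation
    entityType: 'Round',      // related entity type
    entityId: roundId,        // related entity ID
    model: 'gpt-4o',          // AI model used
    promptTokens: 1200,       // input tokens consumed (example)
    completionTokens: 300,    // output tokens consumed (example)
    totalTokens: 1500,        // total tokens consumed (example)
    estimatedCostUsd: 0.02,   // estimated cost in USD (example)
    batchSize: 20,            // items in this batch
    itemsProcessed: 20,       // items successfully processed
    status: 'SUCCESS',        // SUCCESS, PARTIAL, or ERROR
  },
});
```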
### 10.2 Monitoring Metrics
| Metric | Alert Threshold | Purpose |
|--------|-----------------|---------|
| Error rate | >10% in 24 hours | Detect AI service issues |
| Estimated cost (token consumption) | >$100/day | Cost control |
| Validation failures | Any | Detect PII leakage attempts |
| Processing time | >30 seconds/batch | Performance monitoring |
### 10.3 Regular Reviews
| Review | Frequency | Scope |
|--------|-----------|-------|
| AI usage audit | Monthly | Token usage, costs, error rates |
| Anonymisation validation | Quarterly | Sample review of AI inputs |
| DPIA review | Annually | Risk reassessment |
| Subprocessor review | Annually | OpenAI compliance status |
### 10.4 Audit Trail Retention
| Log Type | Retention | Purpose |
|----------|-----------|---------|
| AI usage logs | 12 months | Operational monitoring, cost tracking |
| Audit logs | 12 months | Security and compliance |
| Error logs | 30 days | Debugging |
---
## 11. Incident Response
### 11.1 AI-Specific Incident Types
| Incident Type | Description | Response |
|---------------|-------------|----------|
| **PII Transmission** | Personal data accidentally sent to AI | Immediate: Disable AI; Investigate; Assess breach |
| **Validation Failure** | Pre-transmission check fails | Automatic: Block transmission; Log; Alert |
| **AI Service Breach** | OpenAI reports data breach | Assess impact (likely none - anonymised data); Document |
| **Model Misbehaviour** | AI produces inappropriate content | Disable AI; Review outputs; Resume with modifications |
### 11.2 PII Transmission Response Procedure
If personal data is accidentally transmitted to AI:
1. **Immediate (0-1 hour):**
- Disable all AI processing
- Preserve logs and evidence
- Alert Data Protection Contact
2. **Assessment (1-24 hours):**
- Determine what data was sent
- Identify affected data subjects
- Assess risk level
- Contact OpenAI if deletion needed
3. **Notification (if required):**
- Follow breach notification procedure
- APDP notification if risk to data subjects
- Data subject notification if high risk
4. **Remediation:**
- Fix root cause
- Enhance validation
- Resume AI with additional safeguards
- Document lessons learned
### 11.3 Contact OpenAI
For urgent data-related issues:
- OpenAI Trust & Safety: Via API dashboard
- DPA-related requests: Via contract terms
---
## 12. Compliance Checklist
### 12.1 Technical Compliance
| Requirement | Status | Evidence |
|-------------|--------|----------|
| ✅ Data anonymisation implemented | Complete | `anonymization.ts` |
| ✅ PII validation before transmission | Complete | `validateNoPersonalData()` |
| ⚠️ EU data residency configured | To verify | OpenAI project settings |
| ⚠️ Zero data retention enabled | To verify | OpenAI project settings |
| ✅ TLS encryption for API calls | Complete | HTTPS enforcement |
| ✅ Audit logging implemented | Complete | `AIUsageLog` table |
| ✅ Error handling and fallbacks | Complete | Algorithmic fallbacks |
### 12.2 Organisational Compliance
| Requirement | Status | Evidence |
|-------------|--------|----------|
| ✅ Legal basis documented | Complete | This document, Section 3 |
| ✅ DPIA conducted | Complete | This document, Section 9 |
| ✅ Subprocessor due diligence | Complete | This document, Section 7 |
| ⚠️ DPA executed with OpenAI | Recommended | Standard API terms in use |
| ✅ Data subject rights procedures | Complete | Section 8 |
| ✅ Incident response procedures | Complete | Section 11 |
| ✅ Staff awareness | Ongoing | Training programme |
### 12.3 Documentation Compliance
| Document | Status | Location |
|----------|--------|----------|
| ✅ Platform GDPR compliance | Complete | `docs/gdpr/platform-gdpr-compliance.md` |
| ✅ AI data processing documentation | Complete | This document |
| ✅ AI system architecture | Complete | `docs/architecture/ai-system.md` |
| ✅ AI services reference | Complete | `docs/architecture/ai-services.md` |
| ✅ Processing records (ROPA) | To maintain | As required by Art. 30 |
---
## 13. Contact Information
### 13.1 Data Protection Contact
**Email:** gdpr@monaco-opc.com
For:
- Data subject rights requests related to AI processing
- Questions about AI data handling
- Reporting AI-related incidents
### 13.2 Technical Contact
For AI system technical issues, contact the Platform administrators.
### 13.3 Supervisory Authority
**Autorité de Protection des Données Personnelles (APDP)**
Principality of Monaco
---
## Appendices
### Appendix A: Sample Anonymised Data
Example of data sent to AI (actual personal data replaced with anonymised equivalents):
```json
{
"projects": [
{
"project_id": "P1",
"title": "Coral Reef Restoration Initiative",
"description": "Our project focuses on restoring damaged coral reefs using innovative bio-engineering techniques...",
"category": "STARTUP",
"ocean_issue": "HABITAT_RESTORATION",
"country": "Italy",
"region": "Mediterranean",
"institution": null,
"tags": ["coral", "reef", "restoration", "marine biology"],
"founded_year": 2022,
"team_size": 4,
"has_description": true,
"file_count": 3,
"file_types": ["PITCH_DECK", "VIDEO_PITCH"],
"wants_mentorship": true,
"submission_source": "MANUAL",
"submitted_date": "2026-01-15"
}
]
}
```
**Note:** No names, emails, phone numbers, URLs, or real IDs are included.
### Appendix B: Related Documents
- [Platform GDPR Compliance](./platform-gdpr-compliance.md)
- [AI System Architecture](../architecture/ai-system.md)
- [AI Services Reference](../architecture/ai-services.md)
- [AI Configuration Guide](../architecture/ai-configuration.md)
- [AI Error Handling](../architecture/ai-errors.md)
### Appendix C: Legal References
- [GDPR - Regulation (EU) 2016/679](https://eur-lex.europa.eu/eli/reg/2016/679/oj)
- [Monaco Law 1.565 of December 3, 2024](https://en.gouv.mc/Policy-Practice/A-Modern-State/Protection-of-personal-data)
- [OpenAI Data Processing Addendum](https://openai.com/policies/data-processing-addendum/)
- [OpenAI EU Data Residency](https://openai.com/index/introducing-data-residency-in-europe/)
- [OpenAI Enterprise Privacy](https://openai.com/enterprise-privacy/)
---
**Document Control**
| Version | Date | Changes |
|---------|------|---------|
| 1.0 | January 2025 | Initial version |
| 2.0 | February 2026 | Comprehensive revision: Added definitions, expanded legal framework, detailed technical implementation, enhanced risk assessment, added compliance checklist |

File diff suppressed because it is too large.