This document describes how the Monaco Ocean Protection Challenge (MOPC) Platform uses Artificial Intelligence (AI) services while maintaining strict compliance with the General Data Protection Regulation (GDPR) and Monaco Law 1.565 of December 3, 2024.
### Key Compliance Measures
| Measure | Implementation |
|---------|----------------|
| **Data Minimisation** | Only necessary, non-identifying data sent to AI |
| **Anonymisation** | All personal identifiers stripped before AI processing |
| **EU Data Residency** | AI processing occurs within EU (Ireland) |
| **Zero Data Retention** | AI provider does not store data at rest |
| **Human Oversight** | AI provides recommendations only; humans make final decisions |
| **Audit Trail** | All AI operations logged for accountability |
### Fundamental Principle
**No personal data is transmitted to AI services.** All data sent to OpenAI is fully anonymised, meaning it cannot be attributed to any identifiable natural person. Anonymised data is not considered personal data under GDPR.
---
## 2. Definitions
In addition to the definitions in the [Platform GDPR Compliance](./platform-gdpr-compliance.md) document, the following AI-specific definitions apply:
| Term | Definition |
|------|------------|
| **Artificial Intelligence (AI)** | Computer systems capable of performing tasks that typically require human intelligence, such as pattern recognition, natural language understanding, and decision-making support. |
| **Large Language Model (LLM)** | A type of AI model trained on large amounts of text data to understand and generate human language. OpenAI's GPT models are examples of LLMs. |
| **AI Service** | A component of the Platform that uses AI to process data and provide recommendations or analysis. |
| **Anonymised Data** | Data that has been processed in such a way that the data subject is not or no longer identifiable. Under GDPR, anonymised data is not personal data. |
| **Pseudonymised Data** | Data processed so that it can no longer be attributed to a specific data subject without additional information kept separately. Unlike anonymised data, pseudonymised data is still personal data under GDPR. |
| **Token** | A unit of text processed by an LLM. Approximately 1 token = 4 characters in English. Token usage determines AI processing costs. |
| **Zero Data Retention (ZDR)** | A configuration where the AI provider does not store input or output data at rest after processing is complete. |
| **EU Data Residency** | A configuration ensuring that data is processed within the European Union and does not leave EU jurisdiction. |
| **Prompt** | The text input sent to an AI model, consisting of instructions and data to be processed. |
| **Completion** | The text output generated by an AI model in response to a prompt. |
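The ~4-characters-per-token rule of thumb above can support a rough cost estimate before sending a prompt. A minimal sketch (the function names and the per-1k-token rate are illustrative assumptions; real billing should use the provider's actual tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters/token rule of thumb.

    An order-of-magnitude planning aid only; a real tokenizer gives exact counts.
    """
    return max(1, round(len(text) / 4))


def estimate_cost_eur(prompt: str, completion: str, eur_per_1k_tokens: float) -> float:
    """Estimate processing cost from prompt + completion size."""
    total = estimate_tokens(prompt) + estimate_tokens(completion)
    return total / 1000 * eur_per_1k_tokens
```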
---
## 3. Legal Framework
### 3.1 Legal Basis for AI Processing
AI-assisted processing activities are conducted on the basis of the organisation's legitimate interests (Article 6(1)(f) GDPR), assessed as follows:
- **Relationship:** Direct relationship through competition participation
- **Safeguards in place:**
  - Full anonymisation (not pseudonymisation)
  - EU data residency
  - Zero data retention at AI provider
  - Human oversight of all AI recommendations
  - Right to object and request manual processing
#### Conclusion
The legitimate interests of the organisation are not overridden by the interests, rights, or freedoms of the data subjects. Processing may proceed with the implemented safeguards.
### 3.3 Article 22 - Automated Decision-Making
**Statement:** The Platform's AI processing does **not** constitute automated decision-making as defined in Article 22 of the GDPR.
Article 22(1) states: *"The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."*
**Why Article 22 does not apply:**
1. **Not solely automated:** All AI outputs are recommendations reviewed and approved by human administrators. No decision is made without human involvement.
2. **No legal effects:** AI recommendations do not directly produce legal effects on data subjects. Humans make the final decisions about project advancement, jury assignments, and award eligibility.
3. **No significant effects:** The interim recommendations produced by AI do not, by themselves, significantly affect data subjects. Only the final human decisions have such effects.
4. **Anonymised data:** The data processed by AI is anonymised, meaning Article 22 protections for personal data processing do not apply to the AI processing stage itself.
**Safeguards implemented regardless:**
- Human review of all AI recommendations before implementation
- Right to request explanation of AI-assisted decisions
- Right to request fully manual processing
- Audit logging of AI recommendations and human decisions
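The audit-logging safeguard can be sketched as an append-only record pairing each AI recommendation with the human decision that followed it. Field names and the in-memory store are illustrative assumptions, not the Platform's actual schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass(frozen=True)
class AiAuditEntry:
    """One audit record: what the AI recommended and what the human decided."""
    service: str           # e.g. "project_filtering"
    subject_ref: str       # anonymised reference (e.g. "P1"), never a real identifier
    ai_recommendation: str
    human_decision: str
    reviewer: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class AuditLog:
    """Append-only in-memory log; a real deployment would persist entries."""

    def __init__(self) -> None:
        self._entries: list[AiAuditEntry] = []

    def record(self, entry: AiAuditEntry) -> None:
        self._entries.append(entry)

    def export(self) -> list[dict]:
        return [asdict(e) for e in self._entries]
```

Note that the log itself only ever holds anonymised references, consistent with the fundamental principle above.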
---
## 4. AI Processing Activities
### 4.1 Overview of AI Services
The Platform uses AI for four distinct processing activities:
| Service | Purpose | Input Data | Output |
|---------|---------|------------|--------|
| **Project Filtering** | Evaluate projects against admin-defined criteria | Anonymised project data | Pass/fail recommendations with confidence scores |
| **Jury Assignment** | Match jury expertise to project topics | Anonymised juror and project data | Assignment suggestions with match scores |
| **Award Eligibility** | Determine eligibility for special awards | Anonymised project data | Eligibility determinations with reasoning |
| **Mentor Matching** | Recommend mentors for projects | Anonymised mentor and project data | Ranked mentor recommendations |
### 4.2 AI Project Filtering
**Purpose:** Assist administrators in screening projects against specific criteria (e.g., "Projects must have ocean conservation focus", "Exclude projects without descriptions").
**Process:**
1. Administrator defines criteria in plain language
2. System anonymises project data (see Section 5)
3. Anonymised data sent to AI with criteria
4. AI returns recommendations with confidence scores
5. Administrator reviews and approves/modifies recommendations
6. Results applied to projects
**Human Oversight:** Administrator reviews all AI recommendations before application. Projects flagged by AI as "uncertain" require manual review.
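The uncertainty routing described above can be sketched as a triage step: results below a confidence threshold are queued for manual review, and even high-confidence results wait for administrator approval. The threshold value and record shape are illustrative assumptions:

```python
def triage_recommendations(results: list[dict], threshold: float = 0.8) -> dict:
    """Split AI filter results into auto-apply candidates and manual-review items.

    Each result is assumed to look like:
        {"project_ref": "P1", "passes": True, "confidence": 0.93}
    Anything below `threshold` is treated as "uncertain" and queued for a human.
    Auto-apply candidates are still only applied after administrator approval.
    """
    auto, manual = [], []
    for r in results:
        (auto if r["confidence"] >= threshold else manual).append(r)
    return {"auto_candidates": auto, "manual_review": manual}
```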
### 4.3 AI Jury Assignment
**Purpose:** Suggest optimal juror-project pairings based on expertise alignment.
**Process:**
1. System anonymises juror expertise tags and project data
2. Anonymised data sent to AI with assignment constraints
3. AI returns suggested pairings with match scores and reasoning
4. Administrator reviews suggestions
5. Administrator approves, modifies, or rejects assignments
6. Approved assignments created in system
**Human Oversight:** All assignments require explicit administrator approval. AI suggestions can be partially accepted or entirely rejected.
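A match score of the kind produced in step 3 can be illustrated as expertise-tag overlap between an anonymised juror profile and a project's topic tags. The Jaccard formulation is an assumption for the sketch; the Platform's actual scoring method is not specified here:

```python
def match_score(juror_tags: set[str], project_tags: set[str]) -> float:
    """Jaccard similarity between juror expertise tags and project topic tags.

    Returns a value in [0, 1]; higher means stronger expertise alignment.
    """
    if not juror_tags or not project_tags:
        return 0.0
    overlap = juror_tags & project_tags
    union = juror_tags | project_tags
    return len(overlap) / len(union)
```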
### 4.4 AI Award Eligibility
**Purpose:** Assist in determining which projects meet special award criteria.
**Process:**
1. Award criteria defined (may include rule-based and AI-interpreted criteria)
2. System anonymises project data
3. Anonymised data sent to AI with criteria
4. AI returns eligibility determinations with reasoning
5. Administrator reviews determinations
6. Final eligibility set by administrator
**Human Oversight:** Administrator has final authority on all eligibility decisions. AI reasoning is transparent and reviewable.
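Step 1 distinguishes rule-based criteria (decidable locally) from AI-interpreted ones. The deterministic portion can be sketched as follows; the predicates and data shape are illustrative assumptions:

```python
from typing import Callable


def check_rule_based(project: dict, rules: dict[str, Callable[[dict], bool]]) -> dict:
    """Evaluate the rule-based portion of award criteria locally.

    Only criteria that can be decided deterministically run here; anything
    needing interpretation is left for the AI + human-review path.
    Returns per-rule results plus an overall flag.
    """
    results = {name: rule(project) for name, rule in rules.items()}
    return {"rules": results, "all_passed": all(results.values())}
```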
### 4.5 AI Mentor Matching
**Purpose:** Recommend suitable mentors for selected projects based on expertise.
**Process:**
1. System anonymises mentor profiles and project data
2. Anonymised data sent to AI
3. AI returns ranked mentor recommendations with reasoning
4. Administrator reviews recommendations
5. Assignments made by administrator or offered to mentors
**Human Oversight:** Mentor assignments require administrator approval and mentor acceptance.
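The ranked output in step 3 can be sketched as sorting anonymised mentor candidates by score, keeping the AI's reasoning string alongside each entry so the administrator can review it. The record shape is an illustrative assumption:

```python
def rank_mentors(candidates: list[dict], top_n: int = 3) -> list[dict]:
    """Return the top-N mentor recommendations, highest score first.

    Each candidate is assumed to look like:
        {"mentor_ref": "M1", "score": 0.82, "reasoning": "..."}
    Ties are broken by mentor_ref for a deterministic ordering.
    """
    ordered = sorted(candidates, key=lambda c: (-c["score"], c["mentor_ref"]))
    return ordered[:top_n]
```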
---
## 5. Data Minimisation & Anonymisation
### 5.1 Principles Applied
The Platform applies the GDPR principles of data minimisation and anonymisation to AI processing. The distinction between pseudonymisation and anonymisation is central:
| | Pseudonymisation | Anonymisation |
|---|------------------|---------------|
| Definition | Data can be attributed to individual with additional info | Data cannot be attributed to any individual |
| GDPR Status | Still personal data | Not personal data |
| Example | User123 → Real user (with mapping) | P1, P2 → No mapping to individuals |
| Our implementation | ❌ Not used | ✅ Used |
The Platform uses **anonymisation**, not pseudonymisation. The sequential IDs (P1, P2) used in AI processing cannot be mapped back to individuals by the AI provider or any external party. The mapping exists only within the Platform's secure environment and is used solely to apply AI recommendations to the correct records.
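The sequential-ID scheme described above can be sketched as follows: identifying fields are stripped, records are renumbered P1, P2, …, and the ref-to-ID mapping never leaves the Platform. Field names are illustrative assumptions:

```python
def anonymise_projects(projects: list[dict]) -> tuple[list[dict], dict[str, str]]:
    """Replace identifying fields with sequential refs (P1, P2, ...).

    Returns the anonymised payload (safe to send to the AI provider) and the
    ref -> internal-ID mapping, which stays inside the Platform's secure
    environment and is used only to apply AI recommendations back to the
    correct records.
    """
    payload, mapping = [], {}
    for i, project in enumerate(projects, start=1):
        ref = f"P{i}"
        mapping[ref] = project["id"]
        payload.append({
            "ref": ref,
            # Only non-identifying fields are forwarded; identifiers
            # (internal IDs, emails, names) are dropped entirely.
            "title": project["title"],
            "description": project["description"],
        })
    return payload, mapping
```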
**Recommendation:** Execute the formal OpenAI Data Processing Addendum for enhanced contractual protection, even though only anonymised data is transmitted.
**Reference:** [OpenAI Data Processing Addendum](https://openai.com/policies/data-processing-addendum/)
### 7.3 EU Data Residency
OpenAI offers EU data residency for API customers, and the Platform's AI requests are configured to use it.
**Note:** The actual AI model's internal reasoning is not interpretable. Explanations are based on the prompts used, the recommendations output, and the human reviewer's documented rationale.
After implementation of all mitigation measures, the residual risk of GDPR non-compliance in AI processing is assessed as **Very Low**. The primary reason is that no personal data is transmitted to the AI provider: only fully anonymised data that cannot be attributed to any identifiable natural person.