General Purpose AI (GPAI) Compliance Framework
Document Type: Compliance Standard
Framework Alignment: EU AI Act Articles 51-56, GPAI Code of Practice (July 2025)
Effective Date: 2026-01-15
Version: 1.1
Regulatory Status: Mandatory (EU AI Act obligations effective August 2, 2025)
1. Executive Summary
1.1 Purpose
This framework establishes compliance requirements for organizations that:
- Provide GPAI models (developers, fine-tuners, distributors)
- Use GPAI models (deployers, integrators)
- Integrate GPAI into high-risk AI systems
1.2 Critical Deadlines
| Date | Requirement | Applies To |
|---|---|---|
| August 2, 2025 | GPAI obligations in force | All new GPAI models |
| August 2, 2026 | Commission enforcement powers active | All GPAI providers |
| August 2, 2027 | Legacy GPAI compliance | Models placed on the market before Aug 2, 2025 |
1.3 Penalty Framework
| Violation Type | Maximum Fine |
|---|---|
| GPAI provider violations (Article 101) | €15M or 3% of global annual turnover, whichever is higher |
| Prohibited AI practices (Article 99(3)) | €35M or 7% of global annual turnover, whichever is higher |
| Supplying incorrect, incomplete, or misleading information (Article 99(5)) | €7.5M or 1% of global annual turnover, whichever is higher |
2. GPAI Classification
2.1 Definition (EU AI Act Article 3(63))
A General-Purpose AI Model is an AI model that:
- Is trained with a large amount of data using self-supervision at scale
- Displays significant generality
- Is capable of competently performing a wide range of distinct tasks
- Can be integrated into a variety of downstream systems or applications
2.2 Classification Criteria
| Criterion | Threshold | Classification |
|---|---|---|
| Training Compute | ≥10²³ FLOP | Standard GPAI |
| Training Compute | ≥10²⁵ FLOP | Systemic Risk GPAI |
| Parameters | ≥1 billion (indicative) | Consider GPAI |
| Output Types | Text, image, audio, video, code | Likely GPAI |
| Task Scope | Wide range of distinct tasks | GPAI indicator |
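For teams that want to sanity-check the compute criterion before formal measurement, the thresholds above can be sketched in code. The 6 × parameters × tokens rule below is a common engineering approximation for dense transformer training, not the Act's prescribed measurement method, and the 70B/15T example figures are purely illustrative:

```python
def estimate_training_flop(params: float, tokens: float) -> float:
    """Rough training-compute estimate using the common ~6*N*D
    heuristic (forward + backward pass of a dense transformer).
    An approximation only, not the Act's measurement method."""
    return 6.0 * params * tokens

GPAI_THRESHOLD = 1e23           # indicative GPAI criterion
SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51 presumption

# Illustrative example: a 70B-parameter model trained on 15T tokens
flop = estimate_training_flop(70e9, 15e12)
print(f"{flop:.2e}")  # 6.30e+24: above 10^23, below 10^25
```

A result in this band indicates a likely Standard GPAI classification, pending the other Section 2.2 indicators.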
2.3 Classification Decision Tree
┌─────────────────────────────────────────────────────────────┐
│ Does the model display "significant generality"? │
│ (Can it perform a wide range of distinct tasks?) │
└─────────────────────────────────────────────────────────────┘
│
┌───────────────┴───────────────┐
▼ ▼
[ YES ] [ NO ]
│ │
▼ ▼
┌─────────────────────────┐ ┌─────────────────────────────┐
│ Training Compute │ │ NOT GPAI │
│ ≥10²³ FLOPS? │ │ Standard AI system rules │
└─────────────────────────┘ │ may still apply │
│ └─────────────────────────────┘
┌─────────┴─────────┐
▼ ▼
[ YES ] [ NO ]
│ │
▼ ▼
┌────────────────┐ ┌──────────────────────────────────────┐
│ GPAI MODEL │ │ Evaluate other indicators: │
│ │ │ - Parameters ≥1B │
│ Check systemic │ │ - Self-supervised training │
│ risk threshold │ │ - Multiple output modalities │
└────────────────┘ │ - Wide task applicability │
│ │ │
▼ │ If multiple indicators → GPAI │
┌────────────────┐ │ If specialized → NOT GPAI │
│ ≥10²⁵ FLOPS? │ └──────────────────────────────────────┘
└────────────────┘
│
┌───┴───┐
▼ ▼
YES NO
│ │
▼ ▼
┌─────────────────┐ ┌─────────────────┐
│ SYSTEMIC RISK │ │ STANDARD GPAI │
│ GPAI MODEL │ │ MODEL │
│ │ │ │
│ Articles 51-55 │ │ Article 53 │
│ + Safety Chapter│ │ Transparency │
│ + AI Office │ │ Requirements │
└─────────────────┘ └─────────────────┘
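The decision tree above can be expressed as a small screening function for internal triage. This is an illustrative mirror of the diagram, not legal advice; the indicator counting in the fallback branch is a simplification of the "multiple indicators" step:

```python
def classify_gpai(significant_generality: bool,
                  training_flop: float,
                  indicator_count: int = 0) -> str:
    """Mirror of the Section 2.3 decision tree. `indicator_count`
    counts secondary indicators (parameters >= 1B, self-supervised
    training, multiple output modalities, wide task applicability)."""
    if not significant_generality:
        return "NOT GPAI"
    if training_flop >= 1e25:
        return "SYSTEMIC RISK GPAI"
    if training_flop >= 1e23:
        return "STANDARD GPAI"
    # Below the compute criterion: fall back to the other indicators
    return "STANDARD GPAI" if indicator_count >= 2 else "NOT GPAI"

print(classify_gpai(True, 3e25))      # SYSTEMIC RISK GPAI
print(classify_gpai(True, 5e23))      # STANDARD GPAI
print(classify_gpai(True, 1e22, 3))   # STANDARD GPAI (indicators)
print(classify_gpai(False, 1e26))     # NOT GPAI (no generality)
```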
2.4 Exclusions from GPAI
Models are NOT GPAI if they are specialized for:
- Speech transcription only
- Image upscaling only
- Weather forecasting only
- Gaming applications only
- Other single-purpose tasks without general capability
3. Obligations by Role
3.1 GPAI Provider Obligations (Article 53)
All GPAI providers must:
| # | Obligation | Evidence Required | Timeline |
|---|---|---|---|
| 1 | Maintain technical documentation | Model Documentation Form | Before placement |
| 2 | Provide information to downstream providers | Integration documentation | On request |
| 3 | Establish copyright compliance policy | robots.txt policy, opt-out procedures | Before placement |
| 4 | Publish training data summary | Public summary using AI Office template | Before placement |
3.2 Systemic Risk GPAI Additional Obligations (Article 55)
GPAI with systemic risk must additionally:
| # | Obligation | Evidence Required | Timeline |
|---|---|---|---|
| 5 | Perform model evaluation | Evaluation methodology and results | Ongoing |
| 6 | Assess and mitigate systemic risks | Risk assessment documentation | Ongoing |
| 7 | Track and report serious incidents | Incident log, AI Office reports | Within deadlines |
| 8 | Ensure adequate cybersecurity | Security framework documentation | Ongoing |
| 9 | Notify AI Office of systemic risk models | Notification within 2 weeks | Before/at threshold |
3.3 GPAI Deployer Obligations
Organizations using GPAI in their systems:
| Obligation | SMB Approach | Enterprise Approach |
|---|---|---|
| Inventory GPAI usage | Spreadsheet tracking | GRC system integration |
| Verify provider compliance | Request documentation | Contractual requirements |
| Implement transparency | User disclosures | AI labeling system |
| Monitor for incidents | Manual review | Automated monitoring |
| Maintain documentation | Simplified System Card | Full AI-BOM |
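The SMB "spreadsheet tracking" approach above can be bootstrapped with a minimal machine-readable inventory record. The field names and the example entry are illustrative, not a mandated schema:

```python
from dataclasses import dataclass, asdict
import csv
import io

@dataclass
class GpaiInventoryEntry:
    """One row of a minimal GPAI usage inventory (Section 3.3)."""
    model_name: str
    provider: str
    classification: str         # "Standard GPAI" / "Systemic Risk GPAI"
    our_role: str               # "Provider" / "Deployer" / "Integrator"
    provider_docs_received: bool
    user_disclosure_in_place: bool

entries = [
    GpaiInventoryEntry("example-llm-v2", "ExampleCorp", "Standard GPAI",
                       "Deployer", True, False),
]

# Export as CSV -- the SMB spreadsheet-tracking approach
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(asdict(entries[0]).keys()))
writer.writeheader()
for e in entries:
    writer.writerow(asdict(e))
print(buf.getvalue())
```

Enterprises would feed the same records into a GRC system instead of a CSV export.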
4. Technical Documentation Requirements
4.1 Model Documentation Form
The GPAI Code of Practice specifies a standardized Model Documentation Form:
Section A: General Information
| Field | Required Content |
|---|---|
| Model Name | Official model identifier |
| Version | Version number and date |
| Provider | Legal entity name and contact |
| Release Date | Date of market placement |
| Model Type | Architecture category |
Section B: Technical Specifications
| Field | Required Content |
|---|---|
| Architecture | Model architecture description |
| Parameters | Total parameter count |
| Training Compute | Total floating-point operations (FLOP) used for training |
| Hardware Used | GPU/TPU types and quantities |
| Training Duration | Total training time |
| Energy Consumption | Estimated energy use (kWh) |
Section C: Training Data
| Field | Required Content |
|---|---|
| Data Sources | Categories of training data |
| Data Volume | Approximate size of training corpus |
| Languages | Languages represented |
| Temporal Range | Date range of source data |
| Synthetic Data | Proportion of synthetic data |
Section D: Capabilities and Limitations
| Field | Required Content |
|---|---|
| Intended Uses | Designed applications |
| Known Limitations | Performance boundaries |
| Prohibited Uses | Uses the provider prohibits |
| Evaluation Results | Key benchmark results |
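Teams that keep the Model Documentation Form machine-readable can check completeness automatically before placement. The sketch below mirrors the Section A-D field lists above; the key names and the draft example are illustrative:

```python
# Required fields per section, following Section 4.1 above
FORM_SECTIONS = {
    "A_general": ["Model Name", "Version", "Provider",
                  "Release Date", "Model Type"],
    "B_technical": ["Architecture", "Parameters", "Training Compute",
                    "Hardware Used", "Training Duration",
                    "Energy Consumption"],
    "C_training_data": ["Data Sources", "Data Volume", "Languages",
                        "Temporal Range", "Synthetic Data"],
    "D_capabilities": ["Intended Uses", "Known Limitations",
                       "Prohibited Uses", "Evaluation Results"],
}

def missing_fields(form: dict) -> list[str]:
    """Return required fields that are absent or empty."""
    gaps = []
    for section, fields in FORM_SECTIONS.items():
        for field in fields:
            if not form.get(section, {}).get(field):
                gaps.append(f"{section}.{field}")
    return gaps

draft = {"A_general": {"Model Name": "example-llm-v2", "Version": "2.0"}}
print(len(missing_fields(draft)))  # 18 of the 20 fields still missing
```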
4.2 Training Data Summary Template
Required public disclosure (Article 53(1)(d)):
# Training Data Summary for [Model Name]
## 1. General Description
[High-level description of the training data used]
## 2. Data Categories
- Text: [Yes/No, approximate proportion]
- Code: [Yes/No, approximate proportion]
- Images: [Yes/No, approximate proportion]
- Audio: [Yes/No, approximate proportion]
- Video: [Yes/No, approximate proportion]
- Structured Data: [Yes/No, approximate proportion]
## 3. Data Sources
- Web Crawl Data: [Description, approximate proportion]
- Licensed Datasets: [Categories, no specific names required]
- Proprietary Data: [Categories]
- Synthetic Data: [Description, approximate proportion]
- User-Generated Content: [If applicable]
## 4. Geographic and Linguistic Coverage
- Primary Languages: [List]
- Geographic Representation: [Regions well-represented]
- Known Gaps: [Areas with limited coverage]
## 5. Data Collection Period
- Start Date: [Approximate]
- End Date: [Approximate]
- Knowledge Cutoff: [Approximate date]
## 6. Data Processing
- Filtering Methods: [High-level description]
- Quality Controls: [High-level description]
- Deduplication: [Yes/No]
## 7. Copyright Compliance
- robots.txt Compliance: [Yes/No, methodology]
- Opt-Out Mechanism: [Description if available]
- Copyright Policy: [Reference to policy]
## 8. Notable Exclusions
[Categories of content explicitly excluded]
## 9. Limitations
[Known limitations of the training data]
---
Publication Date: [YYYY-MM-DD]
Provider: [Legal entity name]
5. Copyright Compliance Framework
5.1 Requirements (EU AI Act Article 53(1)(c))
GPAI providers must establish policies to comply with Union copyright law, including:
- Directive (EU) 2019/790 (Copyright in the Digital Single Market)
- Identification and respect of rights reservations (Article 4)
- Text and data mining exceptions (Articles 3 and 4)
5.2 Implementation Requirements
| Requirement | Implementation | Evidence |
|---|---|---|
| robots.txt Compliance | Respect website exclusions during training | Crawler logs, exclusion lists |
| Opt-Out Mechanism | Machine-readable reservation identification | Technical documentation |
| Rights Holder Requests | Process for handling opt-out requests | Request handling procedure |
| Documentation | Maintain records of compliance efforts | Compliance documentation |
5.3 Copyright Compliance Checklist
- Establish written copyright compliance policy
- Implement robots.txt compliance in data collection
- Create machine-readable opt-out mechanism
- Document sources and collection methodology
- Establish process for rights holder requests
- Maintain evidence of compliance efforts
- Regular audit of compliance procedures
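robots.txt compliance in the data-collection pipeline can be checked with the Python standard library's `urllib.robotparser`; a minimal sketch, where the user-agent string and the sample robots.txt are placeholders:

```python
from urllib.robotparser import RobotFileParser

def crawl_allowed(robots_txt: str, url_path: str,
                  agent: str = "ExampleTrainingBot") -> bool:
    """Decide whether a path may be crawled for training data,
    given a site's robots.txt content. `ExampleTrainingBot` is a
    hypothetical user-agent, not a real crawler."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url_path)

robots = """\
User-agent: ExampleTrainingBot
Disallow: /private/

User-agent: *
Allow: /
"""
print(crawl_allowed(robots, "/articles/post-1"))  # True
print(crawl_allowed(robots, "/private/data"))     # False
```

Logging each decision alongside the fetched robots.txt provides the crawler-log evidence referenced in Section 5.2.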
6. Systemic Risk Assessment
6.1 Systemic Risk Indicators
A model is presumed to present systemic risk if:
- Training compute ≥ 10²⁵ FLOP
- It is designated by Commission decision
Additional indicators to evaluate:
- High-impact capabilities across multiple domains
- Potential for significant negative effects on public health, safety, or fundamental rights
- Wide deployment potential
- Ability to generate content at scale
6.2 Risk Assessment Framework
| Risk Category | Assessment Questions | Mitigation Required |
|---|---|---|
| Safety | Could outputs cause physical harm? | Safety filters, content policies |
| Security | Could the model assist cyber attacks? | Red teaming, capability restrictions |
| Fundamental Rights | Could outputs discriminate or violate rights? | Bias testing, fairness evaluation |
| Democratic Processes | Could outputs influence elections/democracy? | Misinformation controls |
| Public Health | Could outputs provide dangerous health advice? | Medical content filters |
| Environment | What is the environmental impact? | Energy efficiency measures |
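Where a numeric rating helps with triage across the categories above, a simple likelihood × impact matrix can be used. This scheme is illustrative only; neither the Act nor the Code mandates a particular scoring method:

```python
SCORE = {"L": 1, "M": 2, "H": 3}

def overall_rating(likelihood: str, impact: str) -> str:
    """Map H/M/L likelihood and impact to the Critical/High/
    Medium/Low ratings used in the Section 6.4 template."""
    product = SCORE[likelihood] * SCORE[impact]
    if product >= 9:
        return "Critical"
    if product >= 6:
        return "High"
    if product >= 3:
        return "Medium"
    return "Low"

print(overall_rating("H", "H"))  # Critical
print(overall_rating("M", "H"))  # High
print(overall_rating("L", "L"))  # Low
```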
6.3 Notification Requirements
Systemic Risk GPAI providers must notify AI Office:
| Trigger | Timeline | Method |
|---|---|---|
| Reasonably foreseeing that the 10²⁵ FLOP threshold will be met | Within 2 weeks | EU SEND platform |
| Reaching the 10²⁵ FLOP threshold | Within 2 weeks | EU SEND platform |
| Serious incident | Without undue delay | EU SEND platform |
| Commission designation | Within 2 weeks of designation | EU SEND platform |
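A conservative way to track the "within 2 weeks" triggers above is to compute the deadline from the trigger date. This sketch assumes a calendar-day reading; confirm the precise deadline computation with legal counsel:

```python
from datetime import date, timedelta

def notification_deadline(trigger: date, weeks: int = 2) -> date:
    """Latest AI Office notification date for a Section 6.3 trigger,
    under a calendar-day reading of 'within 2 weeks'."""
    return trigger + timedelta(weeks=weeks)

# Example: a model reaches the 10^25 FLOP threshold on 2026-03-02
print(notification_deadline(date(2026, 3, 2)))  # 2026-03-16
```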
6.4 Safety and Security Model Report Template
Required for Systemic Risk GPAI providers (Code of Practice Measures 1.4, 7.7)
This template documents the safety and security framework for systemic risk GPAI models and serves as the basis for AI Office submissions.
# Safety and Security Model Report
## Report Metadata
| **Field** | **Value** |
|-----------|----------|
| Report ID | [SSMR-YYYY-NNN] |
| Model Name | [Official model identifier] |
| Model Version | [Version number] |
| Provider | [Legal entity name] |
| Report Date | [YYYY-MM-DD] |
| Report Version | [e.g., 1.0] |
| Classification | [ ] Initial Report [ ] Annual Update [ ] Post-Incident Update |
| Submission Status | [ ] Draft [ ] Submitted to AI Office [ ] Accepted |
---
## 1. Model Identification and Classification
### 1.1 Systemic Risk Classification
| **Criterion** | **Value** | **Evidence** |
|--------------|----------|--------------|
| Training Compute | [X.XX × 10²⁵ FLOP] | [Training logs reference] |
| Parameter Count | [XXB parameters] | [Model card reference] |
| Commission Designation | [ ] Yes [ ] No | [Designation reference if applicable] |
| Designation Date | [YYYY-MM-DD or N/A] | |
### 1.2 Model Capabilities
| **Capability Domain** | **Assessment** | **Risk Level** |
|----------------------|----------------|----------------|
| Text Generation | [ ] Present [ ] Absent | [ ] Low [ ] Medium [ ] High |
| Code Generation | [ ] Present [ ] Absent | [ ] Low [ ] Medium [ ] High |
| Image Generation | [ ] Present [ ] Absent | [ ] Low [ ] Medium [ ] High |
| Audio Generation | [ ] Present [ ] Absent | [ ] Low [ ] Medium [ ] High |
| Video Generation | [ ] Present [ ] Absent | [ ] Low [ ] Medium [ ] High |
| Reasoning/Planning | [ ] Present [ ] Absent | [ ] Low [ ] Medium [ ] High |
| Tool Use/Agency | [ ] Present [ ] Absent | [ ] Low [ ] Medium [ ] High |
---
## 2. Safety Framework Summary
### 2.1 Safety Governance
| **Element** | **Description** |
|------------|-----------------|
| Safety Team | [Team structure and size] |
| Safety Lead | [Name, title] |
| Reporting Line | [Reports to whom] |
| Safety Board | [Composition if applicable] |
| Safety Budget | [Annual allocation range] |
### 2.2 Safety Policies
| **Policy** | **Version** | **Last Updated** | **Summary** |
|-----------|------------|------------------|-------------|
| Responsible AI Policy | | | |
| Model Release Policy | | | |
| Incident Response Policy | | | |
| Red Team Policy | | | |
### 2.3 Pre-Deployment Safety Measures
| **Measure** | **Implementation** | **Status** |
|------------|-------------------|-----------|
| Safety Training Data Curation | [Description] | [ ] Complete |
| RLHF/Constitutional AI | [Description] | [ ] Complete |
| Output Filtering | [Description] | [ ] Complete |
| Refusal Training | [Description] | [ ] Complete |
| Capability Limitations | [Description] | [ ] Complete |
### 2.4 Post-Deployment Safety Measures
| **Measure** | **Implementation** | **Status** |
|------------|-------------------|-----------|
| Usage Monitoring | [Description] | [ ] Active |
| Abuse Detection | [Description] | [ ] Active |
| User Reporting Mechanism | [Description] | [ ] Active |
| Rapid Response Capability | [Description] | [ ] Active |
| Model Update Mechanism | [Description] | [ ] Active |
---
## 3. Security Measures
### 3.1 Model Security
| **Control** | **Implementation** | **Verification** |
|------------|-------------------|-----------------|
| Model Weights Protection | [Encryption, access controls] | [Audit date] |
| Inference API Security | [Authentication, rate limiting] | [Pentest date] |
| Model Extraction Defense | [Technical measures] | [Assessment date] |
| Prompt Injection Defense | [Technical measures] | [Test date] |
| Jailbreak Mitigation | [Technical measures] | [Test date] |
### 3.2 Infrastructure Security
| **Control** | **Standard** | **Status** |
|------------|-------------|-----------|
| ISO 27001 Certification | [ ] Certified [ ] In Progress | [Cert date/ETA] |
| SOC 2 Type II | [ ] Certified [ ] In Progress | [Report date] |
| Cloud Security | [CSP certifications] | [Details] |
| Data Center Security | [Physical security measures] | [Audit date] |
### 3.3 Supply Chain Security
| **Component** | **Security Measure** | **Verification** |
|--------------|---------------------|-----------------|
| Training Data Pipeline | [Access controls, integrity checks] | |
| Dependency Management | [SBOM, vulnerability scanning] | |
| Model Registry | [Signing, verification] | |
| Deployment Pipeline | [CI/CD security] | |
---
## 4. Model Evaluation Results
### 4.1 Safety Benchmarks
| **Benchmark** | **Score** | **Date** | **Threshold** |
|--------------|----------|---------|---------------|
| TruthfulQA | | | |
| BBQ (Bias Benchmark) | | | |
| ToxiGen | | | |
| RealToxicityPrompts | | | |
| HELM Safety Metrics | | | |
| [Internal Safety Benchmark] | | | |
### 4.2 Capability Evaluations
| **Domain** | **Benchmark** | **Performance** | **Concern Level** |
|-----------|--------------|-----------------|------------------|
| CBRN Knowledge | [Internal eval] | | [ ] Low [ ] Medium [ ] High |
| Cyber Capabilities | [Internal eval] | | [ ] Low [ ] Medium [ ] High |
| Persuasion/Manipulation | [Internal eval] | | [ ] Low [ ] Medium [ ] High |
| Deception Capability | [Internal eval] | | [ ] Low [ ] Medium [ ] High |
| Autonomous Action | [Internal eval] | | [ ] Low [ ] Medium [ ] High |
---
## 5. Red Teaming Summary
### 5.1 Red Team Program
| **Element** | **Description** |
|------------|-----------------|
| Team Composition | [Internal/external, size] |
| Methodology | [Approach description] |
| Frequency | [Ongoing/periodic] |
| Scope | [Domains covered] |
### 5.2 Red Team Findings Summary
| **Category** | **Findings** | **Severity** | **Remediation Status** |
|-------------|-------------|-------------|----------------------|
| Jailbreaks | [Count, summary] | [H/M/L] | [ ] Remediated [ ] Mitigated [ ] Accepted |
| Harmful Content Generation | [Count, summary] | [H/M/L] | [ ] Remediated [ ] Mitigated [ ] Accepted |
| Privacy Violations | [Count, summary] | [H/M/L] | [ ] Remediated [ ] Mitigated [ ] Accepted |
| Security Vulnerabilities | [Count, summary] | [H/M/L] | [ ] Remediated [ ] Mitigated [ ] Accepted |
| Bias/Discrimination | [Count, summary] | [H/M/L] | [ ] Remediated [ ] Mitigated [ ] Accepted |
### 5.3 External Red Team Engagements
| **Engagement** | **Firm/Team** | **Date** | **Scope** | **Report Available** |
|---------------|--------------|---------|----------|---------------------|
| | | | | [ ] Yes [ ] No |
| | | | | [ ] Yes [ ] No |
---
## 6. Risk Mitigation Measures
### 6.1 Identified Systemic Risks
| **Risk ID** | **Risk Description** | **Likelihood** | **Impact** | **Overall Rating** |
|------------|---------------------|---------------|-----------|-------------------|
| SR-001 | [Description] | [H/M/L] | [H/M/L] | [Critical/High/Medium/Low] |
| SR-002 | [Description] | [H/M/L] | [H/M/L] | [Critical/High/Medium/Low] |
### 6.2 Mitigation Controls
| **Risk ID** | **Mitigation Measure** | **Implementation Status** | **Effectiveness** |
|------------|----------------------|--------------------------|------------------|
| SR-001 | [Control description] | [ ] Implemented [ ] Planned | [ ] Verified [ ] Pending |
| SR-002 | [Control description] | [ ] Implemented [ ] Planned | [ ] Verified [ ] Pending |
### 6.3 Residual Risk Assessment
| **Risk ID** | **Residual Risk** | **Acceptance Authority** | **Review Date** |
|------------|------------------|-------------------------|----------------|
| SR-001 | [H/M/L] | [Role/Name] | [YYYY-MM-DD] |
| SR-002 | [H/M/L] | [Role/Name] | [YYYY-MM-DD] |
---
## 7. Incident Tracking
### 7.1 Incident Summary (Reporting Period)
| **Metric** | **Value** |
|-----------|----------|
| Total Incidents Reported | |
| Serious Incidents (AI Office notified) | |
| Safety-Related Incidents | |
| Security-Related Incidents | |
| Mean Time to Detect | |
| Mean Time to Respond | |
### 7.2 Serious Incidents (If Any)
| **Incident ID** | **Date** | **Category** | **AI Office Notification** | **Status** |
|----------------|---------|-------------|---------------------------|-----------|
| | | | [Date notified] | [ ] Open [ ] Closed |
### 7.3 Lessons Learned
| **Incident Pattern** | **Root Cause** | **Improvement Implemented** |
|---------------------|---------------|---------------------------|
| | | |
---
## 8. Continuous Improvement
### 8.1 Safety Roadmap
| **Initiative** | **Target Date** | **Status** |
|---------------|----------------|-----------|
| [Safety improvement 1] | | [ ] Planned [ ] In Progress [ ] Complete |
| [Safety improvement 2] | | [ ] Planned [ ] In Progress [ ] Complete |
### 8.2 Upcoming Evaluations
| **Evaluation** | **Planned Date** | **Scope** |
|---------------|-----------------|----------|
| | | |
---
## 9. AI Office Submission Information
### 9.1 Submission Record
| **Submission** | **Date** | **Platform** | **Receipt ID** |
|---------------|---------|-------------|---------------|
| Initial Notification | | EU SEND | |
| This Report | | EU SEND | |
### 9.2 Contact Information
| **Role** | **Name** | **Email** | **Phone** |
|---------|---------|----------|----------|
| Primary Contact | | | |
| Safety Lead | | | |
| Legal Representative | | | |
---
## 10. Attestation
I hereby attest that the information provided in this Safety and Security Model Report is accurate and complete to the best of my knowledge.
| **Role** | **Name** | **Signature** | **Date** |
|---------|---------|--------------|---------|
| CEO/Authorized Representative | | | |
| Chief Safety Officer | | | |
| CISO | | | |
---
Report Classification: [Confidential - AI Office Submission]
Document Control: [Version-controlled, retention 10+ years]
7. GPAI Code of Practice Alignment
7.1 Code Structure
The GPAI Code of Practice (July 2025) is organized into three chapters:
| Chapter | Applies To | Key Measures |
|---|---|---|
| Transparency | All GPAI | Model documentation, downstream info |
| Copyright | All GPAI | robots.txt, opt-out, policy |
| Safety & Security | Systemic Risk only | Evaluation, risk management |
7.2 Compliance Pathways
| Pathway | Description | Benefit |
|---|---|---|
| Code Signatory | Sign and follow Code of Practice | Presumption of compliance |
| Independent Demonstration | Document alternative compliance | Requires detailed justification |
| Hybrid | Follow Code + additional measures | Enhanced compliance posture |
7.3 Key Code Measures
Transparency Chapter (All GPAI):
- Measure 1.1: Complete Model Documentation Form
- Measure 1.2: Provide downstream provider information
- Measure 1.3: Maintain documentation updates
Copyright Chapter (All GPAI):
- Measure 2.1: Establish copyright compliance policy
- Measure 2.2: Implement robots.txt compliance
- Measure 2.3: Create opt-out mechanism
- Measure 2.4: Maintain compliance documentation
Safety & Security Chapter (Systemic Risk only):
- Measure 1.4: Submit Safety and Security Model Report - See Section 6.4 Template
- Measure 7.1: Establish Safety and Security Framework - See Section 6.4 Template
- Measure 7.2: Conduct model evaluations - See Section 6.4, Part 4
- Measure 7.3: Perform red teaming - See Section 6.4, Part 5
- Measure 7.4: Track and report incidents - See Section 6.4, Part 7
- Measure 7.5: Implement cybersecurity measures - See Section 6.4, Part 3
- Measure 7.7: Submit Model Report to AI Office - See Section 6.4, Part 9
8. SMB Implementation Guide
8.1 SMB-Specific Considerations
Small and medium businesses face unique challenges with GPAI compliance. This section provides proportionate implementation guidance.
8.2 SMB Compliance Matrix
| Obligation | Enterprise Approach | SMB Approach |
|---|---|---|
| Model Documentation | Full documentation team | Template-based, owner-maintained |
| Training Data Summary | Detailed tracking system | Simplified template |
| Copyright Compliance | Dedicated legal review | Policy template + spot checks |
| Incident Reporting | Automated monitoring | Manual review process |
| AI-BOM Maintenance | GRC system integration | Spreadsheet/basic tooling |
8.3 SMB Quick-Start Checklist
Week 1-2: Assessment
- Identify all GPAI models in use
- Classify each as Standard or Systemic Risk
- Identify your role (Provider, Deployer, Integrator)
Week 3-4: Documentation
- Complete Model Documentation Form for each GPAI
- Create Training Data Summary if provider
- Establish copyright compliance policy
Week 5-6: Implementation
- Implement required technical measures
- Set up basic monitoring
- Train relevant staff
Ongoing:
- Quarterly documentation review
- Incident monitoring and reporting
- Compliance updates tracking
9. Enterprise Implementation Guide
9.1 Governance Structure
| Role | GPAI Responsibilities |
|---|---|
| AI Risk Officer | GPAI compliance program ownership |
| Legal Counsel | Copyright compliance, contract review |
| Security Team | Systemic risk assessment, red teaming |
| Data Governance | Training data documentation |
| AI Governance Board | GPAI deployment approvals |
9.2 Enterprise Compliance Program
Phase 1: Foundation (Month 1)
- Establish GPAI inventory
- Classify all GPAI by risk level
- Assign ownership for each GPAI system
Phase 2: Documentation (Month 2-3)
- Complete Model Documentation Forms
- Create Training Data Summaries
- Establish AI-BOM for each system
Phase 3: Controls (Month 4-6)
- Implement technical controls
- Establish monitoring systems
- Create incident response procedures
Phase 4: Assurance (Ongoing)
- Regular compliance audits
- Documentation updates
- Regulatory monitoring
9.3 Integration with Existing Frameworks
| Framework | GPAI Integration Point |
|---|---|
| NIST AI RMF | MAP function - GPAI classification |
| ISO/IEC 42001 | Clause 8 - AI operations |
| ISO/IEC 27001 | A.8 - Asset management |
| SOC 2 | CC6 - System operations |
10. Audit and Evidence Requirements
10.1 Required Evidence Repository
| Evidence Type | Retention Period | Format |
|---|---|---|
| Model Documentation Form | Life of model + 10 years | PDF, version controlled |
| Training Data Summary | Life of model + 10 years | Public publication |
| Copyright Compliance Records | 10 years | Internal documentation |
| Incident Reports | 10 years | Structured records |
| AI Office Notifications | 10 years | EU SEND receipts |
10.2 Audit Checklist
For AI Office Audit Readiness:
- Model Documentation Form complete and current
- Training Data Summary published
- Copyright compliance policy documented
- Evidence of robots.txt compliance
- Downstream provider information available
- (Systemic Risk) Safety and Security Framework documented
- (Systemic Risk) Model evaluations conducted
- (Systemic Risk) Red teaming results available
- (Systemic Risk) Incident log maintained
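Audit readiness can be spot-checked by mapping the checklist items above to files in the evidence repository. The mapping and all paths below are hypothetical examples, not a prescribed layout:

```python
from pathlib import Path

# Illustrative mapping of Section 10.2 items to evidence artifacts
REQUIRED = {
    "Model Documentation Form": "evidence/model_documentation_form.pdf",
    "Training Data Summary": "evidence/training_data_summary.md",
    "Copyright Policy": "evidence/copyright_policy.pdf",
}
SYSTEMIC_ONLY = {
    "Safety and Security Framework": "evidence/ssf.pdf",
    "Model Evaluations": "evidence/evaluations",
    "Red Teaming Results": "evidence/red_team_report.pdf",
    "Incident Log": "evidence/incident_log.csv",
}

def audit_gaps(root: str, systemic_risk: bool) -> list[str]:
    """Return checklist items with no evidence file present."""
    items = dict(REQUIRED)
    if systemic_risk:
        items.update(SYSTEMIC_ONLY)
    return [name for name, rel in items.items()
            if not (Path(root) / rel).exists()]

# Lists every item still missing evidence in the given repository
print(audit_gaps("/tmp/compliance-repo", systemic_risk=False))
```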
11. Document Control
11.1 Version History
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2025-06-15 | AI Governance Office | Initial release |
| 1.1 | 2026-01-16 | AI Governance Office | Added Safety and Security Model Report Template (Section 6.4) |
11.2 Approvals
| Role | Name | Date |
|---|---|---|
| AI Risk Officer | ||
| Legal Counsel | ||
| CISO |
Appendix A: GPAI Regulatory Timeline
| Date | Event | Action Required |
|---|---|---|
| Aug 1, 2024 | EU AI Act enters into force | Awareness |
| Feb 2, 2025 | Prohibited practices banned | Review AI uses |
| Aug 2, 2025 | GPAI obligations in force | Full compliance (new models) |
| Aug 2, 2026 | Commission enforcement active | Audit readiness |
| Aug 2, 2027 | Legacy GPAI compliance | Retrofit existing models |
Appendix B: EU SEND Platform Usage
The EU SEND platform is used for submitting GPAI-related documents to the AI Office:
Documents to submit:
- Systemic risk notifications (Article 51(2), 52(2))
- Reassessment requests (Article 52(5))
- Serious incident reports (Article 55(1)(c))
- Safety and Security Framework (Code of Practice Measure 1.4)
- Model Report (Code of Practice Measure 7.7)
Access: https://send.ec.europa.eu (requires EU Login)
Classification: Internal
Review Frequency: Quarterly (during initial implementation), then Annual
CODITECT AI Risk Management Framework
Document ID: AI-RMF-14 | Version: 2.0.0 | Status: Active
AZ1.AI Inc. | CODITECT Platform
Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001
This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.
Repository: coditect-ai-risk-management-framework Last Updated: 2026-01-15 Owner: AZ1.AI Inc. | Lead: Hal Casteel