
General Purpose AI (GPAI) Compliance Framework

Document Type: Compliance Standard
Framework Alignment: EU AI Act Articles 51-56, GPAI Code of Practice (July 2025)
Effective Date: 2026-01-15
Version: 1.1
Regulatory Status: Mandatory (EU AI Act obligations effective August 2, 2025)


1. Executive Summary

1.1 Purpose

This framework establishes compliance requirements for organizations that:

  • Provide GPAI models (developers, fine-tuners, distributors)
  • Use GPAI models (deployers, integrators)
  • Integrate GPAI into high-risk AI systems

1.2 Critical Deadlines

| **Date** | **Requirement** | **Applies To** |
|----------|-----------------|----------------|
| August 2, 2025 | GPAI obligations in force | All new GPAI models |
| August 2, 2026 | Commission enforcement powers active | All GPAI providers |
| August 2, 2027 | Legacy GPAI compliance | Models placed on the market before August 2, 2025 |

1.3 Penalty Framework

| **Violation Type** | **Maximum Fine** |
|--------------------|------------------|
| GPAI provider obligations, including systemic risk obligations (Article 101) | €15M or 3% of global annual turnover |
| Prohibited AI practices (Article 99(3)) | €35M or 7% of global annual turnover |
| Supplying incorrect, incomplete, or misleading information (Article 99(5)) | €7.5M or 1% of global annual turnover |

2. GPAI Classification

2.1 Definition (EU AI Act Article 3(63))

A General-Purpose AI Model is an AI model that:

  • Is trained with a large amount of data using self-supervision at scale
  • Displays significant generality
  • Is capable of competently performing a wide range of distinct tasks
  • Can be integrated into a variety of downstream systems or applications

2.2 Classification Criteria

| **Criterion** | **Threshold** | **Classification** |
|---------------|---------------|--------------------|
| Training compute | ≥10²³ FLOPS | Standard GPAI |
| Training compute | ≥10²⁵ FLOPS | Systemic risk GPAI |
| Parameters | ≥1 billion (indicative) | Consider GPAI |
| Output types | Text, image, audio, video, code | Likely GPAI |
| Task scope | Wide range of distinct tasks | GPAI indicator |

2.3 Classification Decision Tree

Step 1: Does the model display "significant generality" (can it competently perform a wide range of distinct tasks)?

  • NO → NOT GPAI. Standard AI system rules may still apply.
  • YES → Proceed to Step 2.

Step 2: Is training compute ≥10²³ FLOPS?

  • NO → Evaluate other indicators: parameters ≥1B, self-supervised training, multiple output modalities, wide task applicability. If multiple indicators are present, treat the model as GPAI and proceed to Step 3; if the model is specialized, it is NOT GPAI.
  • YES → GPAI model. Proceed to Step 3 to check the systemic risk threshold.

Step 3: Is training compute ≥10²⁵ FLOPS?

  • YES → SYSTEMIC RISK GPAI MODEL: Articles 51-55 plus the Safety & Security chapter apply, with AI Office oversight.
  • NO → STANDARD GPAI MODEL: Article 53 transparency requirements apply.
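The decision tree above can be sketched as a small triage helper. The thresholds come from Section 2.2; the indicator-counting heuristic for sub-threshold models is an illustrative simplification of the "multiple indicators" step, not a legal test:

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    significant_generality: bool    # wide range of distinct tasks?
    training_compute_flops: float   # cumulative training compute
    parameter_count: int = 0
    output_modalities: int = 1
    self_supervised: bool = False

STANDARD_GPAI_FLOPS = 1e23    # indicative GPAI threshold (Section 2.2)
SYSTEMIC_RISK_FLOPS = 1e25    # Article 51(2) presumption

def classify(m: ModelProfile) -> str:
    """Triage a model against the Section 2.3 decision tree."""
    if not m.significant_generality:
        return "NOT GPAI"
    if m.training_compute_flops >= SYSTEMIC_RISK_FLOPS:
        return "SYSTEMIC RISK GPAI"
    if m.training_compute_flops >= STANDARD_GPAI_FLOPS:
        return "STANDARD GPAI"
    # Below the compute threshold: weigh the remaining indicators.
    indicators = sum([
        m.parameter_count >= 1_000_000_000,
        m.self_supervised,
        m.output_modalities >= 2,
    ])
    return "STANDARD GPAI" if indicators >= 2 else "NOT GPAI"
```

Borderline results from a helper like this should always go to legal review; the tree is a screening aid, not a determination.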

2.4 Exclusions from GPAI

Models are NOT GPAI if they are specialized for:

  • Speech transcription only
  • Image upscaling only
  • Weather forecasting only
  • Gaming applications only
  • Other single-purpose tasks without general capability

3. Obligations by Role

3.1 GPAI Provider Obligations (Article 53)

All GPAI providers must:

| **#** | **Obligation** | **Evidence Required** | **Timeline** |
|-------|----------------|-----------------------|--------------|
| 1 | Maintain technical documentation | Model Documentation Form | Before placement |
| 2 | Provide information to downstream providers | Integration documentation | On request |
| 3 | Establish copyright compliance policy | robots.txt policy, opt-out procedures | Before placement |
| 4 | Publish training data summary | Public summary using AI Office template | Before placement |

3.2 Systemic Risk GPAI Additional Obligations (Article 55)

GPAI with systemic risk must additionally:

| **#** | **Obligation** | **Evidence Required** | **Timeline** |
|-------|----------------|-----------------------|--------------|
| 5 | Perform model evaluation | Evaluation methodology and results | Ongoing |
| 6 | Assess and mitigate systemic risks | Risk assessment documentation | Ongoing |
| 7 | Track and report serious incidents | Incident log, AI Office reports | Within deadlines |
| 8 | Ensure adequate cybersecurity | Security framework documentation | Ongoing |
| 9 | Notify AI Office of systemic risk models | Notification within 2 weeks | Before/at threshold |

3.3 GPAI Deployer Obligations

Organizations using GPAI in their systems must meet the following obligations:

| **Obligation** | **SMB Approach** | **Enterprise Approach** |
|----------------|------------------|-------------------------|
| Inventory GPAI usage | Spreadsheet tracking | GRC system integration |
| Verify provider compliance | Request documentation | Contractual requirements |
| Implement transparency | User disclosures | AI labeling system |
| Monitor for incidents | Manual review | Automated monitoring |
| Maintain documentation | Simplified System Card | Full AI-BOM |

4. Technical Documentation Requirements

4.1 Model Documentation Form

The GPAI Code of Practice specifies a standardized Model Documentation Form:

Section A: General Information

| **Field** | **Required Content** |
|-----------|----------------------|
| Model Name | Official model identifier |
| Version | Version number and date |
| Provider | Legal entity name and contact |
| Release Date | Date of market placement |
| Model Type | Architecture category |

Section B: Technical Specifications

| **Field** | **Required Content** |
|-----------|----------------------|
| Architecture | Model architecture description |
| Parameters | Total parameter count |
| Training Compute | Total FLOPS used for training |
| Hardware Used | GPU/TPU types and quantities |
| Training Duration | Total training time |
| Energy Consumption | Estimated energy use (kWh) |

Section C: Training Data

| **Field** | **Required Content** |
|-----------|----------------------|
| Data Sources | Categories of training data |
| Data Volume | Approximate size of training corpus |
| Languages | Languages represented |
| Temporal Range | Date range of source data |
| Synthetic Data | Proportion of synthetic data |

Section D: Capabilities and Limitations

| **Field** | **Required Content** |
|-----------|----------------------|
| Intended Uses | Designed applications |
| Known Limitations | Performance boundaries |
| Prohibited Uses | Uses the provider prohibits |
| Evaluation Results | Key benchmark results |

4.2 Training Data Summary Template

Required public disclosure (Article 53(1)(d)):

# Training Data Summary for [Model Name]

## 1. General Description
[High-level description of the training data used]

## 2. Data Categories
- Text: [Yes/No, approximate proportion]
- Code: [Yes/No, approximate proportion]
- Images: [Yes/No, approximate proportion]
- Audio: [Yes/No, approximate proportion]
- Video: [Yes/No, approximate proportion]
- Structured Data: [Yes/No, approximate proportion]

## 3. Data Sources
- Web Crawl Data: [Description, approximate proportion]
- Licensed Datasets: [Categories, no specific names required]
- Proprietary Data: [Categories]
- Synthetic Data: [Description, approximate proportion]
- User-Generated Content: [If applicable]

## 4. Geographic and Linguistic Coverage
- Primary Languages: [List]
- Geographic Representation: [Regions well-represented]
- Known Gaps: [Areas with limited coverage]

## 5. Data Collection Period
- Start Date: [Approximate]
- End Date: [Approximate]
- Knowledge Cutoff: [Date]

## 6. Data Processing
- Filtering Methods: [High-level description]
- Quality Controls: [High-level description]
- Deduplication: [Yes/No]

## 7. Copyright Compliance
- robots.txt Compliance: [Yes/No, methodology]
- Opt-Out Mechanism: [Description if available]
- Copyright Policy: [Reference to policy]

## 8. Notable Exclusions
[Categories of content explicitly excluded]

## 9. Limitations
[Known limitations of the training data]

---
Publication Date: [YYYY-MM-DD]
Provider: [Legal entity name]

5. Copyright Compliance

5.1 Requirements (EU AI Act Article 53(1)(c))

GPAI providers must establish policies to comply with Union copyright law, including:

  • Directive (EU) 2019/790 (Copyright in the Digital Single Market)
  • Identification and respect of rights reservations (Article 4)
  • Text and data mining exceptions (Articles 3 and 4)

5.2 Implementation Requirements

| **Requirement** | **Implementation** | **Evidence** |
|-----------------|--------------------|--------------|
| robots.txt compliance | Respect website exclusions during training | Crawler logs, exclusion lists |
| Opt-out mechanism | Machine-readable reservation identification | Technical documentation |
| Rights holder requests | Process for handling opt-out requests | Request handling procedure |
| Documentation | Maintain records of compliance efforts | Compliance documentation |

Implementation checklist:

  • Establish a written copyright compliance policy
  • Implement robots.txt compliance in data collection
  • Create a machine-readable opt-out mechanism
  • Document sources and collection methodology
  • Establish a process for rights holder requests
  • Maintain evidence of compliance efforts
  • Audit compliance procedures regularly
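The robots.txt compliance step can be sketched with Python's standard `urllib.robotparser`. The crawler name and rules below are hypothetical; a real pipeline would fetch each site's robots.txt, log the decision, and retain those logs as compliance evidence:

```python
from urllib import robotparser

def may_collect(robots_lines: list[str], user_agent: str, url: str) -> bool:
    """Evaluate a site's robots.txt rules before collecting a page for
    training data. Fetch the robots.txt content separately and pass its
    lines here so the decision can be tested and logged deterministically."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_lines)
    return rp.can_fetch(user_agent, url)

# Hypothetical rules in which the site opts out of a training crawler.
rules = [
    "User-agent: ExampleTrainingBot",
    "Disallow: /",
    "",
    "User-agent: *",
    "Disallow: /private/",
]
```

Note that robots.txt is only one opt-out channel; a full implementation would also honor other machine-readable rights reservations under Article 4 of Directive (EU) 2019/790.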

6. Systemic Risk Assessment

6.1 Systemic Risk Indicators

Models are presumed to have systemic risk if:

  • Training compute ≥ 10²⁵ FLOPS
  • Designated by Commission decision

Additional indicators to evaluate:

  • High-impact capabilities across multiple domains
  • Potential for significant negative effects on public health, safety, or fundamental rights
  • Wide deployment potential
  • Ability to generate content at scale

6.2 Risk Assessment Framework

| **Risk Category** | **Assessment Questions** | **Mitigation Required** |
|-------------------|--------------------------|-------------------------|
| Safety | Could outputs cause physical harm? | Safety filters, content policies |
| Security | Could the model assist cyber attacks? | Red teaming, capability restrictions |
| Fundamental Rights | Could outputs discriminate or violate rights? | Bias testing, fairness evaluation |
| Democratic Processes | Could outputs influence elections/democracy? | Misinformation controls |
| Public Health | Could outputs provide dangerous health advice? | Medical content filters |
| Environment | What is the environmental impact? | Energy efficiency measures |

6.3 Notification Requirements

Systemic Risk GPAI providers must notify AI Office:

| **Trigger** | **Timeline** | **Method** |
|-------------|--------------|------------|
| Reasonably foreseeing the 10²⁵ FLOPS threshold | Within 2 weeks | EU SEND platform |
| Reaching the 10²⁵ FLOPS threshold | Within 2 weeks | EU SEND platform |
| Serious incident | Without undue delay | EU SEND platform |
| Commission designation | Within 2 weeks of designation | EU SEND platform |
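The two-week windows above can be tracked mechanically. A minimal sketch (the function name is illustrative, and a date helper is an aid for internal tracking, not a substitute for verifying the legal deadline):

```python
from datetime import date, timedelta

NOTIFICATION_WINDOW = timedelta(weeks=2)   # "within 2 weeks" triggers above

def notification_deadline(trigger: date) -> date:
    """Latest date to notify the AI Office after a notification trigger,
    e.g. reaching or reasonably foreseeing the 10**25 FLOPS threshold."""
    return trigger + NOTIFICATION_WINDOW
```

Wiring this into the incident and compliance calendar ensures the deadline is visible from the day the trigger event is recorded.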

6.4 Safety and Security Model Report Template

Required for Systemic Risk GPAI providers (Code of Practice Measures 1.4, 7.7)

This template documents the safety and security framework for systemic risk GPAI models and serves as the basis for AI Office submissions.

# Safety and Security Model Report

## Report Metadata

| **Field** | **Value** |
|-----------|----------|
| Report ID | [SSMR-YYYY-NNN] |
| Model Name | [Official model identifier] |
| Model Version | [Version number] |
| Provider | [Legal entity name] |
| Report Date | [YYYY-MM-DD] |
| Report Version | [e.g., 1.0] |
| Classification | [ ] Initial Report [ ] Annual Update [ ] Post-Incident Update |
| Submission Status | [ ] Draft [ ] Submitted to AI Office [ ] Accepted |

---

## 1. Model Identification and Classification

### 1.1 Systemic Risk Classification

| **Criterion** | **Value** | **Evidence** |
|--------------|----------|--------------|
| Training Compute | [X.XX × 10²⁵ FLOPS] | [Training logs reference] |
| Parameter Count | [XXB parameters] | [Model card reference] |
| Commission Designation | [ ] Yes [ ] No | [Designation reference if applicable] |
| Designation Date | [YYYY-MM-DD or N/A] | |

### 1.2 Model Capabilities

| **Capability Domain** | **Assessment** | **Risk Level** |
|----------------------|----------------|----------------|
| Text Generation | [ ] Present [ ] Absent | [ ] Low [ ] Medium [ ] High |
| Code Generation | [ ] Present [ ] Absent | [ ] Low [ ] Medium [ ] High |
| Image Generation | [ ] Present [ ] Absent | [ ] Low [ ] Medium [ ] High |
| Audio Generation | [ ] Present [ ] Absent | [ ] Low [ ] Medium [ ] High |
| Video Generation | [ ] Present [ ] Absent | [ ] Low [ ] Medium [ ] High |
| Reasoning/Planning | [ ] Present [ ] Absent | [ ] Low [ ] Medium [ ] High |
| Tool Use/Agency | [ ] Present [ ] Absent | [ ] Low [ ] Medium [ ] High |

---

## 2. Safety Framework Summary

### 2.1 Safety Governance

| **Element** | **Description** |
|------------|-----------------|
| Safety Team | [Team structure and size] |
| Safety Lead | [Name, title] |
| Reporting Line | [Reports to whom] |
| Safety Board | [Composition if applicable] |
| Safety Budget | [Annual allocation range] |

### 2.2 Safety Policies

| **Policy** | **Version** | **Last Updated** | **Summary** |
|-----------|------------|------------------|-------------|
| Responsible AI Policy | | | |
| Model Release Policy | | | |
| Incident Response Policy | | | |
| Red Team Policy | | | |

### 2.3 Pre-Deployment Safety Measures

| **Measure** | **Implementation** | **Status** |
|------------|-------------------|-----------|
| Safety Training Data Curation | [Description] | [ ] Complete |
| RLHF/Constitutional AI | [Description] | [ ] Complete |
| Output Filtering | [Description] | [ ] Complete |
| Refusal Training | [Description] | [ ] Complete |
| Capability Limitations | [Description] | [ ] Complete |

### 2.4 Post-Deployment Safety Measures

| **Measure** | **Implementation** | **Status** |
|------------|-------------------|-----------|
| Usage Monitoring | [Description] | [ ] Active |
| Abuse Detection | [Description] | [ ] Active |
| User Reporting Mechanism | [Description] | [ ] Active |
| Rapid Response Capability | [Description] | [ ] Active |
| Model Update Mechanism | [Description] | [ ] Active |

---

## 3. Security Measures

### 3.1 Model Security

| **Control** | **Implementation** | **Verification** |
|------------|-------------------|-----------------|
| Model Weights Protection | [Encryption, access controls] | [Audit date] |
| Inference API Security | [Authentication, rate limiting] | [Pentest date] |
| Model Extraction Defense | [Technical measures] | [Assessment date] |
| Prompt Injection Defense | [Technical measures] | [Test date] |
| Jailbreak Mitigation | [Technical measures] | [Test date] |

### 3.2 Infrastructure Security

| **Control** | **Standard** | **Status** |
|------------|-------------|-----------|
| ISO 27001 Certification | [ ] Certified [ ] In Progress | [Cert date/ETA] |
| SOC 2 Type II | [ ] Certified [ ] In Progress | [Report date] |
| Cloud Security | [CSP certifications] | [Details] |
| Data Center Security | [Physical security measures] | [Audit date] |

### 3.3 Supply Chain Security

| **Component** | **Security Measure** | **Verification** |
|--------------|---------------------|-----------------|
| Training Data Pipeline | [Access controls, integrity checks] | |
| Dependency Management | [SBOM, vulnerability scanning] | |
| Model Registry | [Signing, verification] | |
| Deployment Pipeline | [CI/CD security] | |

---

## 4. Model Evaluation Results

### 4.1 Safety Benchmarks

| **Benchmark** | **Score** | **Date** | **Threshold** |
|--------------|----------|---------|---------------|
| TruthfulQA | | | |
| BBQ (Bias Benchmark) | | | |
| ToxiGen | | | |
| RealToxicityPrompts | | | |
| HELM Safety Metrics | | | |
| [Internal Safety Benchmark] | | | |

### 4.2 Capability Evaluations

| **Domain** | **Benchmark** | **Performance** | **Concern Level** |
|-----------|--------------|-----------------|------------------|
| CBRN Knowledge | [Internal eval] | | [ ] Low [ ] Medium [ ] High |
| Cyber Capabilities | [Internal eval] | | [ ] Low [ ] Medium [ ] High |
| Persuasion/Manipulation | [Internal eval] | | [ ] Low [ ] Medium [ ] High |
| Deception Capability | [Internal eval] | | [ ] Low [ ] Medium [ ] High |
| Autonomous Action | [Internal eval] | | [ ] Low [ ] Medium [ ] High |

---

## 5. Red Teaming Summary

### 5.1 Red Team Program

| **Element** | **Description** |
|------------|-----------------|
| Team Composition | [Internal/external, size] |
| Methodology | [Approach description] |
| Frequency | [Ongoing/periodic] |
| Scope | [Domains covered] |

### 5.2 Red Team Findings Summary

| **Category** | **Findings** | **Severity** | **Remediation Status** |
|-------------|-------------|-------------|----------------------|
| Jailbreaks | [Count, summary] | [H/M/L] | [ ] Remediated [ ] Mitigated [ ] Accepted |
| Harmful Content Generation | [Count, summary] | [H/M/L] | [ ] Remediated [ ] Mitigated [ ] Accepted |
| Privacy Violations | [Count, summary] | [H/M/L] | [ ] Remediated [ ] Mitigated [ ] Accepted |
| Security Vulnerabilities | [Count, summary] | [H/M/L] | [ ] Remediated [ ] Mitigated [ ] Accepted |
| Bias/Discrimination | [Count, summary] | [H/M/L] | [ ] Remediated [ ] Mitigated [ ] Accepted |

### 5.3 External Red Team Engagements

| **Engagement** | **Firm/Team** | **Date** | **Scope** | **Report Available** |
|---------------|--------------|---------|----------|---------------------|
| | | | | [ ] Yes [ ] No |
| | | | | [ ] Yes [ ] No |

---

## 6. Risk Mitigation Measures

### 6.1 Identified Systemic Risks

| **Risk ID** | **Risk Description** | **Likelihood** | **Impact** | **Overall Rating** |
|------------|---------------------|---------------|-----------|-------------------|
| SR-001 | [Description] | [H/M/L] | [H/M/L] | [Critical/High/Medium/Low] |
| SR-002 | [Description] | [H/M/L] | [H/M/L] | [Critical/High/Medium/Low] |

### 6.2 Mitigation Controls

| **Risk ID** | **Mitigation Measure** | **Implementation Status** | **Effectiveness** |
|------------|----------------------|--------------------------|------------------|
| SR-001 | [Control description] | [ ] Implemented [ ] Planned | [ ] Verified [ ] Pending |
| SR-002 | [Control description] | [ ] Implemented [ ] Planned | [ ] Verified [ ] Pending |

### 6.3 Residual Risk Assessment

| **Risk ID** | **Residual Risk** | **Acceptance Authority** | **Review Date** |
|------------|------------------|-------------------------|----------------|
| SR-001 | [H/M/L] | [Role/Name] | [YYYY-MM-DD] |
| SR-002 | [H/M/L] | [Role/Name] | [YYYY-MM-DD] |

---

## 7. Incident Tracking

### 7.1 Incident Summary (Reporting Period)

| **Metric** | **Value** |
|-----------|----------|
| Total Incidents Reported | |
| Serious Incidents (AI Office notified) | |
| Safety-Related Incidents | |
| Security-Related Incidents | |
| Mean Time to Detect | |
| Mean Time to Respond | |

### 7.2 Serious Incidents (If Any)

| **Incident ID** | **Date** | **Category** | **AI Office Notification** | **Status** |
|----------------|---------|-------------|---------------------------|-----------|
| | | | [Date notified] | [ ] Open [ ] Closed |

### 7.3 Lessons Learned

| **Incident Pattern** | **Root Cause** | **Improvement Implemented** |
|---------------------|---------------|---------------------------|
| | | |

---

## 8. Continuous Improvement

### 8.1 Safety Roadmap

| **Initiative** | **Target Date** | **Status** |
|---------------|----------------|-----------|
| [Safety improvement 1] | | [ ] Planned [ ] In Progress [ ] Complete |
| [Safety improvement 2] | | [ ] Planned [ ] In Progress [ ] Complete |

### 8.2 Upcoming Evaluations

| **Evaluation** | **Planned Date** | **Scope** |
|---------------|-----------------|----------|
| | | |

---

## 9. AI Office Submission Information

### 9.1 Submission Record

| **Submission** | **Date** | **Platform** | **Receipt ID** |
|---------------|---------|-------------|---------------|
| Initial Notification | | EU SEND | |
| This Report | | EU SEND | |

### 9.2 Contact Information

| **Role** | **Name** | **Email** | **Phone** |
|---------|---------|----------|----------|
| Primary Contact | | | |
| Safety Lead | | | |
| Legal Representative | | | |

---

## 10. Attestation

I hereby attest that the information provided in this Safety and Security Model Report is accurate and complete to the best of my knowledge.

| **Role** | **Name** | **Signature** | **Date** |
|---------|---------|--------------|---------|
| CEO/Authorized Representative | | | |
| Chief Safety Officer | | | |
| CISO | | | |

---

Report Classification: [Confidential - AI Office Submission]
Document Control: [Version-controlled, retention 10+ years]

7. GPAI Code of Practice Alignment

7.1 Code Structure

The GPAI Code of Practice (July 2025) comprises three chapters:

| **Chapter** | **Applies To** | **Key Measures** |
|-------------|----------------|------------------|
| Transparency | All GPAI | Model documentation, downstream information |
| Copyright | All GPAI | robots.txt, opt-out, policy |
| Safety & Security | Systemic risk only | Evaluation, risk management |

7.2 Compliance Pathways

| **Pathway** | **Description** | **Benefit** |
|-------------|-----------------|-------------|
| Code Signatory | Sign and follow the Code of Practice | Presumption of compliance |
| Independent Demonstration | Document alternative compliance | Requires detailed justification |
| Hybrid | Follow the Code plus additional measures | Enhanced compliance posture |

7.3 Key Code Measures

Transparency Chapter (All GPAI):

  • Measure 1.1: Complete Model Documentation Form
  • Measure 1.2: Provide downstream provider information
  • Measure 1.3: Maintain documentation updates
  • Measure 1.4: Submit Safety and Security Model Report (systemic only) - See Section 6.4 Template

Copyright Chapter (All GPAI):

  • Measure 2.1: Establish copyright compliance policy
  • Measure 2.2: Implement robots.txt compliance
  • Measure 2.3: Create opt-out mechanism
  • Measure 2.4: Maintain compliance documentation

Safety & Security Chapter (Systemic Risk only):

  • Measure 7.1: Establish Safety and Security Framework - See Section 6.4 Template
  • Measure 7.2: Conduct model evaluations - See Section 6.4, Part 4
  • Measure 7.3: Perform red teaming - See Section 6.4, Part 5
  • Measure 7.4: Track and report incidents - See Section 6.4, Part 7
  • Measure 7.5: Implement cybersecurity measures - See Section 6.4, Part 3
  • Measure 7.7: Submit Model Report to AI Office - See Section 6.4, Part 9

8. SMB Implementation Guide

8.1 SMB-Specific Considerations

Small and medium businesses face unique challenges with GPAI compliance. This section provides proportionate implementation guidance.

8.2 SMB Compliance Matrix

| **Obligation** | **Enterprise Approach** | **SMB Approach** |
|----------------|-------------------------|------------------|
| Model documentation | Full documentation team | Template-based, owner-maintained |
| Training data summary | Detailed tracking system | Simplified template |
| Copyright compliance | Dedicated legal review | Policy template + spot checks |
| Incident reporting | Automated monitoring | Manual review process |
| AI-BOM maintenance | GRC system integration | Spreadsheet/basic tooling |

8.3 SMB Quick-Start Checklist

Week 1-2: Assessment

  • Identify all GPAI models in use
  • Classify each as Standard or Systemic Risk
  • Identify your role (Provider, Deployer, Integrator)

Week 3-4: Documentation

  • Complete Model Documentation Form for each GPAI
  • Create Training Data Summary if provider
  • Establish copyright compliance policy

Week 5-6: Implementation

  • Implement required technical measures
  • Set up basic monitoring
  • Train relevant staff

Ongoing:

  • Quarterly documentation review
  • Incident monitoring and reporting
  • Compliance updates tracking

9. Enterprise Implementation Guide

9.1 Governance Structure

| **Role** | **GPAI Responsibilities** |
|----------|---------------------------|
| AI Risk Officer | GPAI compliance program ownership |
| Legal Counsel | Copyright compliance, contract review |
| Security Team | Systemic risk assessment, red teaming |
| Data Governance | Training data documentation |
| AI Governance Board | GPAI deployment approvals |

9.2 Enterprise Compliance Program

Phase 1: Foundation (Month 1)

  • Establish GPAI inventory
  • Classify all GPAI by risk level
  • Assign ownership for each GPAI system

Phase 2: Documentation (Month 2-3)

  • Complete Model Documentation Forms
  • Create Training Data Summaries
  • Establish AI-BOM for each system

Phase 3: Controls (Month 4-6)

  • Implement technical controls
  • Establish monitoring systems
  • Create incident response procedures

Phase 4: Assurance (Ongoing)

  • Regular compliance audits
  • Documentation updates
  • Regulatory monitoring

9.3 Integration with Existing Frameworks

| **Framework** | **GPAI Integration Point** |
|---------------|----------------------------|
| NIST AI RMF | MAP function - GPAI classification |
| ISO/IEC 42001 | Clause 8 - AI operations |
| ISO/IEC 27001 | A.8 - Asset management |
| SOC 2 | CC6 - System operations |

10. Audit and Evidence Requirements

10.1 Required Evidence Repository

| **Evidence Type** | **Retention Period** | **Format** |
|-------------------|----------------------|------------|
| Model Documentation Form | Life of model + 10 years | PDF, version controlled |
| Training Data Summary | Life of model + 10 years | Public publication |
| Copyright Compliance Records | 10 years | Internal documentation |
| Incident Reports | 10 years | Structured records |
| AI Office Notifications | 10 years | EU SEND receipts |

10.2 Audit Checklist

For AI Office Audit Readiness:

  • Model Documentation Form complete and current
  • Training Data Summary published
  • Copyright compliance policy documented
  • Evidence of robots.txt compliance
  • Downstream provider information available
  • (Systemic Risk) Safety and Security Framework documented
  • (Systemic Risk) Model evaluations conducted
  • (Systemic Risk) Red teaming results available
  • (Systemic Risk) Incident log maintained

11. Document Control

11.1 Version History

| **Version** | **Date** | **Author** | **Changes** |
|-------------|----------|------------|-------------|
| 1.0 | 2025-06-15 | AI Governance Office | Initial release |
| 1.1 | 2026-01-16 | AI Governance Office | Added Safety and Security Model Report Template (Section 6.4) |

11.2 Approvals

| **Role** | **Name** | **Date** |
|----------|----------|----------|
| AI Risk Officer | | |
| Legal Counsel | | |
| CISO | | |

Appendix A: GPAI Regulatory Timeline

| **Date** | **Event** | **Action Required** |
|----------|-----------|---------------------|
| Aug 1, 2024 | EU AI Act enters into force | Awareness |
| Feb 2, 2025 | Prohibited practices banned | Review AI uses |
| Aug 2, 2025 | GPAI obligations in force | Full compliance (new models) |
| Aug 2, 2026 | Commission enforcement active | Audit readiness |
| Aug 2, 2027 | Legacy GPAI compliance | Retrofit existing models |

Appendix B: EU SEND Platform Usage

The EU SEND platform is used for submitting GPAI-related documents to the AI Office:

Documents to submit:

  • Systemic risk notifications (Article 51(2), 52(2))
  • Reassessment requests (Article 52(5))
  • Serious incident reports (Article 55(1)(c))
  • Safety and Security Framework (Code of Practice Measure 1.4)
  • Model Report (Code of Practice Measure 7.7)

Access: https://send.ec.europa.eu (requires EU Login)


Classification: Internal
Review Frequency: Quarterly (during initial implementation), then Annual


CODITECT AI Risk Management Framework

Document ID: AI-RMF-14 | Version: 2.0.0 | Status: Active


AZ1.AI Inc. | CODITECT Platform

Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001


This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework | Last Updated: 2026-01-15 | Owner: AZ1.AI Inc. | Lead: Hal Casteel