# AI Risk Classification & Tiering Matrix
*The Logic Engine for Governance*
## Document Control
| Field | Details |
|---|---|
| Document Type | Standard / Guidance |
| Purpose | Systematically categorize AI use cases into four risk tiers |
| Applies To | All AI Intake and Registration processes |
| Version | v2.0 |
| Framework Alignment | NIST AI RMF 2.0, EU AI Act, ISO/IEC 42001 |
## 1. Risk Tiers at a Glance
| Tier | Name | Description | Governance Philosophy | EU AI Act Mapping |
|---|---|---|---|---|
| Tier 1 | Low Risk | Minimal impact; internal-only or non-material tasks | "Register & Go" – Focus on inventory and basic transparency | Minimal Risk |
| Tier 2 | Medium Risk | Moderate impact; operational support; human-in-the-loop | "Trust but Verify" – Standardized controls and internal review | Limited Risk / Transparency |
| Tier 3 | High Risk | Material impact; affects rights, safety, or critical operations | "Gatekeeper Approval" – Rigorous testing, legal review, board sign-off | High Risk (Annex III) |
| Tier 4 | Critical Risk | Existential impact; safety-critical; autonomous decision-making | "Executive Mandate" – Intensive scrutiny, external validation, executive voting | Potentially Prohibited / Systemic Risk GPAI |
## 2. Classification Logic (Scoring Model)
A system is assigned to the tier matching the highest score it triggers across any of the following dimensions:
### 2.1 Scoring Dimensions
| Dimension | Score 1 (Low) | Score 2 (Medium) | Score 3 (High) | Score 4 (Critical) |
|---|---|---|---|---|
| Data Sensitivity | Public data, non-sensitive internal | Internal confidential, limited PII | Sensitive PII, SPI (health/financial) | Highly classified, biometric, MNPI |
| Autonomy Level | Informational only, human must act | Human-in-the-loop (reviews before action) | Human-on-the-loop (monitors, may intervene) | Human-out-of-the-loop (fully autonomous) |
| Impact Scope | Minor inconvenience | Operational friction, minor financial loss | Regulatory fines, discrimination, material loss | Physical harm, massive loss, severe sanctions |
| Scale | <100 users, internal pilot | 100-10,000 users, single department | 10,000-1M users, external customers | >1M users, critical infrastructure |
**Final Tier = maximum score across all dimensions.**
### 2.2 Classification Formula
`Tier = MAX(Data_Sensitivity_Score, Autonomy_Score, Impact_Score, Scale_Score)`
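The MAX rule above can be sketched in Python. This is an illustrative sketch only; the class and function names are hypothetical, not part of any mandated implementation:

```python
from dataclasses import dataclass

@dataclass
class DimensionScores:
    """Scores 1-4 for each dimension, per the table in section 2.1."""
    data_sensitivity: int
    autonomy: int
    impact: int
    scale: int

def classify_tier(s: DimensionScores) -> int:
    """Final tier is the maximum score across all dimensions."""
    scores = (s.data_sensitivity, s.autonomy, s.impact, s.scale)
    if not all(1 <= v <= 4 for v in scores):
        raise ValueError("each dimension score must be between 1 and 4")
    return max(scores)

# Example: limited PII (2), human-in-the-loop (2), regulatory exposure (3),
# internal pilot (1) -> Tier 3, driven by the impact dimension alone.
```

Because a single high-scoring dimension sets the tier, a small internal pilot that touches regulated decisions still lands in Tier 3.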
## 3. Tier Definitions and Examples
### Tier 1: Low Risk (Information & Assistance)
**Characteristics:**
- Data: Public data or non-sensitive internal data (no PII/CSI)
- Autonomy: Informational only; human must actively use the output
- Impact: Failure results in minor inconvenience; no financial or regulatory impact
- Scale: Internal users only, limited deployment
**Common Examples:**
- Internal "Helpdesk" chatbot for IT FAQs
- Spam filters (standard)
- Meeting summarization tools (non-sensitive meetings)
- Code auto-complete assistants (non-critical codebases)
- Document formatting assistants
- Internal knowledge search
### Tier 2: Medium Risk (Operational Support & Optimization)
**Characteristics:**
- Data: Internal confidential data, limited PII (employee names), aggregated customer data
- Autonomy: Recommender systems where human reviews before action (Human-in-the-loop)
- Impact: Failure causes operational friction, minor financial loss (<$50k), limited internal confusion
- Scale: Department-wide or limited customer-facing
**Common Examples:**
- Customer service chatbots (routine queries) with human escalation
- Marketing copy generation tools
- Internal inventory forecasting models
- Resume sorting assistants (providing rankings, not rejections)
- Sentiment analysis for customer feedback
- Email drafting assistants
### Tier 3: High Risk (Decisions & Rights)
**Characteristics:**
- Data: Sensitive PII, SPI (health, financial, biometric), material non-public information
- Autonomy: System makes decisions affecting user's experience/access, or Human-on-the-loop
- Impact: Failure results in regulatory fines, reputational damage, discrimination, material loss ($50k-$1M)
- Scale: External customers, significant user base
**Common Examples:**
- Credit/Loan eligibility scoring
- Fraud detection triggers that freeze accounts
- Hiring/Recruiting bots that filter candidates
- Generative AI producing external-facing content without manual review
- Legal contract analysis tools
- Medical triage assistants
- Educational assessment tools
- Insurance underwriting assistants
### Tier 4: Critical Risk (Safety & Autonomy)
**Characteristics:**
- Data: Highly classified secrets, critical infrastructure telemetry, mass-scale biometric data
- Autonomy: Fully autonomous execution (Human-out-of-the-loop) in high-stakes environments
- Impact: Failure results in physical harm, massive financial loss (>$1M), severe regulatory sanctions, enterprise-wide crisis
- Scale: Critical infrastructure, mass population impact
**Common Examples:**
- Autonomous vehicle/machinery control systems
- Algorithmic trading (high frequency/autonomous execution)
- Medical diagnosis tools acting without physician review
- Cybersecurity automated response (active countermeasures)
- Critical infrastructure control systems
- Autonomous weapons or security systems
- Real-time biometric identification systems
- Social scoring systems
## 4. EU AI Act Automatic High-Risk Triggers
The following AI applications are automatically classified as High Risk (Tier 3) per EU AI Act Annex III, regardless of scoring:
| Domain | Use Case |
|---|---|
| Biometrics | Remote biometric identification (except for targeted searches with safeguards) |
| Critical Infrastructure | Safety components in management of water, gas, heating, electricity |
| Education | Determining access to education, evaluating learning outcomes, proctoring |
| Employment | Recruitment, screening, promotion decisions, task allocation, termination |
| Essential Services | Creditworthiness, credit scoring, risk assessment for life/health insurance |
| Law Enforcement | Individual risk assessment, polygraphs, emotion detection, profiling |
| Migration | Risk assessment for irregular migration, visa applications, asylum |
| Justice | Researching legal facts, applying law, dispute resolution |
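The Annex III override can be expressed as a guard applied after the section-2 scoring. A sketch under stated assumptions: the domain keys are hypothetical machine-readable shorthand for the table rows above:

```python
# Annex III domains from the table above (hypothetical shorthand keys).
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def apply_annex_iii(scored_tier: int, domain: str) -> int:
    """Annex III use cases are High Risk (Tier 3) regardless of scoring,
    but a higher scored tier is never lowered by the override."""
    if domain in ANNEX_III_DOMAINS:
        return max(scored_tier, 3)
    return scored_tier
```

Note the `max`: the override is a floor, so a use case that already scored Tier 4 stays at Tier 4.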
## 5. GPAI Model Classification (EU AI Act)
### 5.1 GPAI Model Identification
A model qualifies as general-purpose AI (GPAI) if:
- Trained using more than 10²³ FLOPs (floating-point operations), AND
- Capable of generating language, text-to-image, or text-to-video outputs, AND
- Displays significant generality across tasks
### 5.2 Systemic Risk GPAI
GPAI models are presumed to pose systemic risk if:
- Training compute ≥ 10²⁵ FLOPs, OR
- The European Commission designates the model based on high-impact capabilities
### 5.3 GPAI Tier Mapping
| GPAI Type | Tier | Additional Requirements |
|---|---|---|
| Standard GPAI | Minimum Tier 2 | Technical documentation, copyright policy, training data summary |
| Systemic Risk GPAI | Tier 4 | Above + risk assessment, red teaming, incident reporting, cybersecurity |
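The GPAI mapping above reduces to a small threshold check. A minimal sketch, assuming compute is expressed in total training FLOPs; the function name is illustrative:

```python
def gpai_minimum_tier(training_flops: float,
                      designated_by_commission: bool = False) -> int:
    """Map a GPAI model to its minimum governance tier per section 5.3.
    Thresholds follow the EU AI Act presumptions cited in 5.1/5.2."""
    if designated_by_commission or training_flops >= 1e25:
        return 4  # systemic-risk GPAI
    return 2      # standard GPAI is governed at Tier 2 minimum
```

This is a minimum: the section-2 scoring or the Annex III triggers may still push a specific GPAI deployment higher.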
## 6. Agentic AI Classification Modifiers
Agentic AI systems receive automatic tier elevation based on autonomy characteristics:
| Characteristic | Tier Modifier |
|---|---|
| Single agent with bounded actions | +0 (use base scoring) |
| Single agent with tool access | +1 tier minimum |
| Multi-agent system | +1 tier minimum |
| Autonomous action without approval | +2 tiers minimum |
| Self-modification capability | Automatic Tier 4 |
| External system write access | +1 tier minimum |
Example: A customer service agent (base Tier 2) with autonomous email sending (tool access) = Tier 3 minimum
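The modifier table can be sketched as follows. The table does not specify whether multiple modifiers stack, so this sketch conservatively applies only the single largest applicable modifier and caps the result at Tier 4; all identifiers are hypothetical:

```python
# Tier bumps from the section-6 table (hypothetical shorthand keys).
AGENTIC_MODIFIERS = {
    "bounded_actions": 0,
    "tool_access": 1,
    "multi_agent": 1,
    "external_write_access": 1,
    "autonomous_no_approval": 2,
}

def agentic_tier(base_tier: int, characteristics: set) -> int:
    """Elevate the base tier for agentic characteristics, capped at Tier 4.
    Self-modification is an automatic Tier 4 regardless of base tier."""
    if "self_modification" in characteristics:
        return 4
    bump = max((AGENTIC_MODIFIERS[c] for c in characteristics), default=0)
    return min(4, base_tier + bump)

# The worked example above: base Tier 2 + tool access -> Tier 3 minimum.
```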
## 7. Required Controls by Tier
### Tier 1 (Low) Controls
| Control | Required |
|---|---|
| Inventory Registration | ✓ |
| Owner Assigned | ✓ |
| User Transparency (if applicable) | ✓ |
| Basic Documentation | ✓ |
### Tier 2 (Medium) Controls
Includes all Tier 1 controls plus:
| Control | Required |
|---|---|
| Risk Assessment Form | ✓ |
| Data Lineage Documentation | ✓ |
| Basic Accuracy/Performance Testing | ✓ |
| Periodic Manual Review (monthly) | ✓ |
| Change Control Process | ✓ |
### Tier 3 (High) Controls
Includes all Tier 2 controls plus:
| Control | Required |
|---|---|
| Bias/Fairness Testing | ✓ |
| Security Review / Red Teaming | ✓ |
| Legal/Privacy Review Sign-off | ✓ |
| Automated Monitoring (real-time) | ✓ |
| Incident Response Plan | ✓ |
| Explainability Documentation | ✓ |
| Human Oversight Mechanism | ✓ |
| Algorithmic Impact Assessment | ✓ |
| System Card (Model Card) | ✓ |
| EU AI Act Conformity Assessment (if applicable) | ✓ |
### Tier 4 (Critical) Controls
Includes all Tier 3 controls plus:
| Control | Required |
|---|---|
| Independent Validation | ✓ |
| Executive (Board) Sign-off | ✓ |
| Kill Switch (tested) | ✓ |
| Adversarial Hardening | ✓ |
| External Audit | ✓ |
| Continuous Monitoring (24/7) | ✓ |
| Formal Incident Playbook | ✓ |
| Insurance/Liability Review | ✓ |
| Regulatory Notification (if required) | ✓ |
| Action Boundaries (for agentic AI) | ✓ |
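The cumulative structure ("includes all lower-tier controls") can be modeled as a union over tiers. The control sets below are abbreviated illustrations of the tables above, not the authoritative lists:

```python
# Abbreviated, illustrative control sets per tier; the full lists are
# the section-7 tables above.
TIER_CONTROLS = {
    1: {"inventory_registration", "owner_assigned", "basic_documentation"},
    2: {"risk_assessment_form", "data_lineage", "performance_testing"},
    3: {"bias_testing", "red_teaming", "incident_response_plan"},
    4: {"independent_validation", "kill_switch", "external_audit"},
}

def required_controls(tier: int) -> set:
    """Each tier inherits every control from all lower tiers."""
    if tier not in TIER_CONTROLS:
        raise ValueError("tier must be 1-4")
    return set().union(*(TIER_CONTROLS[t] for t in range(1, tier + 1)))
```

Modeling inheritance as a union keeps each tier's table short while guaranteeing, for example, that a Tier 4 system still carries the Tier 1 inventory obligation.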
## 8. Re-Classification Triggers
A system must be re-scored if any of the following occur:
| Trigger | Action Required |
|---|---|
| Data Change | Begins processing higher sensitivity data |
| Autonomy Change | Human oversight removed or reduced |
| Scale Change | Deployment expands significantly |
| Use Case Change | New application of existing model |
| Incident | Material incident occurs |
| Regulatory Change | New regulation affects classification |
| Vendor Change | Model provider changes |
| Performance Degradation | Significant drift detected |
## 9. Prohibited Uses (Automatic Rejection)
The following uses are prohibited and cannot be tiered; they must be rejected at intake:
| Prohibited Use | EU AI Act Reference |
|---|---|
| Social scoring by public/private entities | Article 5(1)(c) |
| Real-time remote biometric identification in public (except specific exemptions) | Article 5(1)(h) |
| Subliminal manipulation causing harm | Article 5(1)(a) |
| Exploitation of vulnerabilities (age, disability) | Article 5(1)(b) |
| Emotion recognition in workplace/education for adverse decisions | Article 5(1)(f) |
| Biometric categorization inferring sensitive attributes | Article 5(1)(g) |
| Untargeted scraping for facial recognition databases | Article 5(1)(e) |
| Predictive policing based solely on profiling | Article 5(1)(d) |
## 10. Quick Classification Checklist
Use this checklist for rapid initial assessment:
- [ ] Does the system make or influence decisions affecting individuals' rights? → Tier 3+
- [ ] Does the system process sensitive personal data (health, financial, biometric)? → Tier 3+
- [ ] Is the system used in employment, credit, education, or law enforcement? → Tier 3 (EU AI Act High-Risk)
- [ ] Does the system operate autonomously without human approval per action? → Tier 3+
- [ ] Is this a GPAI model with systemic risk (≥ 10²⁵ FLOPs)? → Tier 4
- [ ] Can the system take actions in external systems? → +1 Tier
- [ ] Is this a multi-agent system? → +1 Tier
- [ ] Could failure result in physical harm? → Tier 4
- [ ] Is this a prohibited use case under the EU AI Act? → REJECT
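The checklist can be sketched as a triage function. The answer keys are hypothetical shorthand for the questions above, and letting the two "+1 Tier" checks stack is an assumption this document does not settle:

```python
def quick_triage(answers: dict) -> str:
    """Rapid initial assessment mirroring the section-10 checklist.
    Prohibited uses short-circuit to rejection before any tiering."""
    if answers.get("prohibited_use"):
        return "REJECT"
    tier = 1
    if answers.get("systemic_gpai") or answers.get("physical_harm"):
        tier = 4
    elif any(answers.get(k) for k in
             ("affects_rights", "sensitive_data", "annex_iii_domain",
              "autonomous_actions")):
        tier = 3
    # Assumption: the two "+1 Tier" modifiers stack, capped at Tier 4.
    for modifier in ("external_actions", "multi_agent"):
        if answers.get(modifier):
            tier = min(4, tier + 1)
    return f"Tier {tier}"
```

The result is a starting point for intake, not a final classification; the full section-2 scoring still applies.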
## Document History
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2025-06-15 | AI Governance Office | Initial release |
| 2.0 | 2026-01-15 | AI Governance Office | Added EU AI Act Annex III triggers, GPAI classification, agentic AI modifiers |
**Next Step:** Proceed to Artifact 4: AI Intake & Registration Form
CODITECT AI Risk Management Framework
Document ID: AI-RMF-03 | Version: 2.0.0 | Status: Active
AZ1.AI Inc. | CODITECT Platform
Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001
This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.
Repository: coditect-ai-risk-management-framework | Last Updated: 2026-01-15 | Owner: AZ1.AI Inc. | Lead: Hal Casteel