AI Risk Classification & Tiering Matrix

The Logic Engine for Governance


Document Control

| Field | Details |
| --- | --- |
| Document Type | Standard / Guidance |
| Purpose | Systematically categorize AI use cases into four risk tiers |
| Applies To | All AI Intake and Registration processes |
| Version | v2.0 |
| Framework Alignment | NIST AI RMF 2.0, EU AI Act, ISO/IEC 42001 |

1. Risk Tiers at a Glance

| Tier | Name | Description | Governance Philosophy | EU AI Act Mapping |
| --- | --- | --- | --- | --- |
| Tier 1 | Low Risk | Minimal impact; internal-only or non-material tasks | "Register & Go" – Focus on inventory and basic transparency | Minimal Risk |
| Tier 2 | Medium Risk | Moderate impact; operational support; human-in-the-loop | "Trust but Verify" – Standardized controls and internal review | Limited Risk / Transparency |
| Tier 3 | High Risk | Material impact; affects rights, safety, or critical operations | "Gatekeeper Approval" – Rigorous testing, legal review, board sign-off | High Risk (Annex III) |
| Tier 4 | Critical Risk | Existential impact; safety-critical; autonomous decision-making | "Executive Mandate" – Intensive scrutiny, external validation, executive voting | Potentially Prohibited / Systemic Risk GPAI |

2. Classification Logic (Scoring Model)

A system is classified based on the highest level it triggers across any of the following dimensions:

2.1 Scoring Dimensions

| Dimension | Score 1 (Low) | Score 2 (Medium) | Score 3 (High) | Score 4 (Critical) |
| --- | --- | --- | --- | --- |
| Data Sensitivity | Public data, non-sensitive internal | Internal confidential, limited PII | Sensitive PII, SPI (health/financial) | Highly classified, biometric, MNPI |
| Autonomy Level | Informational only, human must act | Human-in-the-loop (reviews before action) | Human-on-the-loop (monitors, may intervene) | Human-out-of-the-loop (fully autonomous) |
| Impact Scope | Minor inconvenience | Operational friction, minor financial loss | Regulatory fines, discrimination, material loss | Physical harm, massive loss, severe sanctions |
| Scale | <100 users, internal pilot | 100–10,000 users, single department | 10,000–1M users, external customers | >1M users, critical infrastructure |

Final Tier = Maximum score across all dimensions

2.2 Classification Formula

Tier = MAX(Data_Sensitivity_Score, Autonomy_Score, Impact_Score, Scale_Score)
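Assuming each dimension has already been scored 1–4 per the table above, the MAX rule can be sketched in Python (the function and parameter names are illustrative, not part of this standard):

```python
def classify_tier(data_sensitivity: int, autonomy: int, impact: int, scale: int) -> int:
    """Return the risk tier (1-4) as the maximum score across all dimensions."""
    scores = (data_sensitivity, autonomy, impact, scale)
    if any(s not in (1, 2, 3, 4) for s in scores):
        raise ValueError("each dimension must be scored 1 (Low) to 4 (Critical)")
    return max(scores)

# A system handling sensitive PII (3) with low autonomy (1), medium impact (2),
# and departmental scale (2) is still Tier 3: the highest single score wins.
print(classify_tier(3, 1, 2, 2))  # → 3
```

The key design point is that scores never average out: one critical dimension is enough to place the whole system in the corresponding tier.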

3. Tier Definitions and Examples

Tier 1: Low Risk (Information & Assistance)

Characteristics:

  • Data: Public data or non-sensitive internal data (no PII/CSI)
  • Autonomy: Informational only; human must actively use the output
  • Impact: Failure results in minor inconvenience; no financial or regulatory impact
  • Scale: Internal users only, limited deployment

Common Examples:

  • Internal "Helpdesk" chatbot for IT FAQs
  • Spam filters (standard)
  • Meeting summarization tools (non-sensitive meetings)
  • Code auto-complete assistants (non-critical codebases)
  • Document formatting assistants
  • Internal knowledge search

Tier 2: Medium Risk (Operational Support & Optimization)

Characteristics:

  • Data: Internal confidential data, limited PII (employee names), aggregated customer data
  • Autonomy: Recommender systems where human reviews before action (Human-in-the-loop)
  • Impact: Failure causes operational friction, minor financial loss (<$50k), limited internal confusion
  • Scale: Department-wide or limited customer-facing

Common Examples:

  • Customer service chatbots (routine queries) with human escalation
  • Marketing copy generation tools
  • Internal inventory forecasting models
  • Resume sorting assistants (providing rankings, not rejections)
  • Sentiment analysis for customer feedback
  • Email drafting assistants

Tier 3: High Risk (Decisions & Rights)

Characteristics:

  • Data: Sensitive PII, SPI (health, financial, biometric), material non-public information
  • Autonomy: System makes decisions affecting user's experience/access, or Human-on-the-loop
  • Impact: Failure results in regulatory fines, reputational damage, discrimination, material loss ($50k-$1M)
  • Scale: External customers, significant user base

Common Examples:

  • Credit/Loan eligibility scoring
  • Fraud detection triggers that freeze accounts
  • Hiring/Recruiting bots that filter candidates
  • Generative AI producing external-facing content without manual review
  • Legal contract analysis tools
  • Medical triage assistants
  • Educational assessment tools
  • Insurance underwriting assistants

Tier 4: Critical Risk (Safety & Autonomy)

Characteristics:

  • Data: Highly classified secrets, critical infrastructure telemetry, mass-scale biometric data
  • Autonomy: Fully autonomous execution (Human-out-of-the-loop) in high-stakes environments
  • Impact: Failure results in physical harm, massive financial loss (>$1M), severe regulatory sanctions, enterprise-wide crisis
  • Scale: Critical infrastructure, mass population impact

Common Examples:

  • Autonomous vehicle/machinery control systems
  • Algorithmic trading (high frequency/autonomous execution)
  • Medical diagnosis tools acting without physician review
  • Cybersecurity automated response (active countermeasures)
  • Critical infrastructure control systems
  • Autonomous weapons or security systems
  • Real-time biometric identification systems
  • Social scoring systems

4. EU AI Act Automatic High-Risk Triggers

The following AI applications are automatically classified as High Risk (Tier 3) per EU AI Act Annex III, regardless of scoring:

| Domain | Use Case |
| --- | --- |
| Biometrics | Remote biometric identification (except for targeted searches with safeguards) |
| Critical Infrastructure | Safety components in management of water, gas, heating, electricity |
| Education | Determining access to education, evaluating learning outcomes, proctoring |
| Employment | Recruitment, screening, promotion decisions, task allocation, termination |
| Essential Services | Creditworthiness, credit scoring, risk assessment for life/health insurance |
| Law Enforcement | Individual risk assessment, polygraphs, emotion detection, profiling |
| Migration | Risk assessment for irregular migration, visa applications, asylum |
| Justice | Researching legal facts, applying law, dispute resolution |
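An intake tool can enforce this Annex III floor with a simple domain lookup that raises the scored tier to at least 3. A minimal sketch (the domain keys are illustrative shorthand, not official Annex III identifiers):

```python
# Domains that trigger an automatic Tier 3 floor per EU AI Act Annex III.
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def apply_annex_iii_floor(scored_tier: int, domain: str) -> int:
    """Raise the tier to at least 3 when the use case is in an Annex III domain."""
    if domain in ANNEX_III_DOMAINS:
        return max(scored_tier, 3)
    return scored_tier

print(apply_annex_iii_floor(2, "employment"))        # → 3
print(apply_annex_iii_floor(2, "internal_tooling"))  # → 2
```

Note the floor only raises a tier; a use case that already scores Tier 4 on the dimensions keeps its higher classification.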

5. GPAI Model Classification (EU AI Act)

5.1 GPAI Model Identification

A model qualifies as GPAI if:

  • Trained using more than 10²³ FLOPs of compute, AND
  • Capable of generating language, text-to-image, or text-to-video outputs, AND
  • Displays significant generality across tasks

5.2 Systemic Risk GPAI

GPAI models are presumed to have systemic risk if:

  • Training compute ≥ 10²⁵ FLOPs, OR
  • Commission designates based on high-impact capabilities

5.3 GPAI Tier Mapping

| GPAI Type | Tier | Additional Requirements |
| --- | --- | --- |
| Standard GPAI | Minimum Tier 2 | Technical documentation, copyright policy, training data summary |
| Systemic Risk GPAI | Tier 4 | Above + risk assessment, red teaming, incident reporting, cybersecurity |
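The compute-threshold part of this mapping can be sketched as below. This is a simplification: it assumes the generality and output-modality conditions from Section 5.1 are already satisfied, and the function name and category strings are illustrative.

```python
GPAI_THRESHOLD_FLOPS = 1e23          # GPAI presumption threshold (Section 5.1)
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # systemic-risk presumption (Section 5.2)

def classify_gpai(training_flops: float, commission_designated: bool = False) -> str:
    """Map training compute (plus Commission designation) to the Section 5.3 categories."""
    if training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS or commission_designated:
        return "systemic_risk_gpai"  # Tier 4
    if training_flops > GPAI_THRESHOLD_FLOPS:
        return "standard_gpai"       # minimum Tier 2
    return "not_gpai"

print(classify_gpai(3e25))                              # → systemic_risk_gpai
print(classify_gpai(5e23))                              # → standard_gpai
print(classify_gpai(5e23, commission_designated=True))  # → systemic_risk_gpai
```

Note the asymmetry carried over from the text: the GPAI threshold is "more than" (strict), while the systemic-risk threshold is "greater than or equal to", and Commission designation overrides compute entirely.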

6. Agentic AI Classification Modifiers

Agentic AI systems receive automatic tier elevation based on autonomy characteristics:

| Characteristic | Tier Modifier |
| --- | --- |
| Single agent with bounded actions | +0 (use base scoring) |
| Single agent with tool access | +1 tier minimum |
| Multi-agent system | +1 tier minimum |
| Autonomous action without approval | +2 tiers minimum |
| Self-modification capability | Automatic Tier 4 |
| External system write access | +1 tier minimum |

Example: A customer service agent (base Tier 2) with autonomous email sending (tool access) = Tier 3 minimum
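A sketch of the modifier logic, matching the worked example above. One assumption is labeled explicitly: the matrix does not say whether multiple modifiers stack, so this version treats them as non-stacking floors and applies only the largest; the names are illustrative.

```python
# Modifier values from the Section 6 table ("self_modification" is handled separately).
AGENTIC_MODIFIERS = {
    "bounded_actions": 0,
    "tool_access": 1,
    "multi_agent": 1,
    "autonomous_action": 2,
    "external_write": 1,
}

def agentic_tier(base_tier: int, characteristics: set[str]) -> int:
    """Apply agentic modifiers as minimum-tier floors, capped at Tier 4."""
    if "self_modification" in characteristics:
        return 4  # automatic Tier 4, regardless of base scoring
    bump = max((AGENTIC_MODIFIERS.get(c, 0) for c in characteristics), default=0)
    return min(base_tier + bump, 4)

# The document's example: base Tier 2 customer service agent with tool access.
print(agentic_tier(2, {"tool_access"}))  # → 3
```

If your program interprets the modifiers as cumulative instead, replace `max(...)` with `sum(...)`; the cap at Tier 4 and the self-modification override are unchanged either way.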


7. Required Controls by Tier

Tier 1 (Low) Controls

| Control | Required |
| --- | --- |
| Inventory Registration | Yes |
| Owner Assigned | Yes |
| User Transparency (if applicable) | Yes |
| Basic Documentation | Yes |

Tier 2 (Medium) Controls

Includes all Tier 1 controls plus:

| Control | Required |
| --- | --- |
| Risk Assessment Form | Yes |
| Data Lineage Documentation | Yes |
| Basic Accuracy/Performance Testing | Yes |
| Periodic Manual Review (monthly) | Yes |
| Change Control Process | Yes |

Tier 3 (High) Controls

Includes all Tier 2 controls plus:

| Control | Required |
| --- | --- |
| Bias/Fairness Testing | Yes |
| Security Review / Red Teaming | Yes |
| Legal/Privacy Review Sign-off | Yes |
| Automated Monitoring (real-time) | Yes |
| Incident Response Plan | Yes |
| Explainability Documentation | Yes |
| Human Oversight Mechanism | Yes |
| Algorithmic Impact Assessment | Yes |
| System Card (Model Card) | Yes |
| EU AI Act Conformity Assessment (if applicable) | Yes |

Tier 4 (Critical) Controls

Includes all Tier 3 controls plus:

| Control | Required |
| --- | --- |
| Independent Validation | Yes |
| Executive (Board) Sign-off | Yes |
| Kill Switch (tested) | Yes |
| Adversarial Hardening | Yes |
| External Audit | Yes |
| Continuous Monitoring (24/7) | Yes |
| Formal Incident Playbook | Yes |
| Insurance/Liability Review | Yes |
| Regulatory Notification (if required) | Yes |
| Action Boundaries (for agentic AI) | Yes |

8. Re-Classification Triggers

A system must be re-scored if any of the following occur:

| Trigger | Description |
| --- | --- |
| Data Change | Begins processing higher sensitivity data |
| Autonomy Change | Human oversight removed or reduced |
| Scale Change | Deployment expands significantly |
| Use Case Change | New application of existing model |
| Incident | Material incident occurs |
| Regulatory Change | New regulation affects classification |
| Vendor Change | Model provider changes |
| Performance Degradation | Significant drift detected |

9. Prohibited Uses (Automatic Rejection)

The following uses are prohibited and cannot be classified—they must be rejected:

| Prohibited Use | EU AI Act Reference |
| --- | --- |
| Social scoring by public/private entities | Article 5(1)(c) |
| Real-time remote biometric identification in public (except specific exemptions) | Article 5(1)(h) |
| Subliminal manipulation causing harm | Article 5(1)(a) |
| Exploitation of vulnerabilities (age, disability) | Article 5(1)(b) |
| Emotion recognition in workplace/education for adverse decisions | Article 5(1)(f) |
| Biometric categorization inferring sensitive attributes | Article 5(1)(g) |
| Untargeted scraping for facial recognition databases | Article 5(1)(e) |
| Predictive policing based solely on profiling | Article 5(1)(d) |

10. Quick Classification Checklist

Use this checklist for rapid initial assessment:

□ Does the system make or influence decisions affecting individuals' rights? → Tier 3+
□ Does the system process sensitive personal data (health, financial, biometric)? → Tier 3+
□ Is the system used in employment, credit, education, or law enforcement? → Tier 3 (EU AI Act High-Risk)
□ Does the system operate autonomously without human approval per action? → Tier 3+
□ Is this a GPAI model with systemic risk (≥10²⁵ FLOPs)? → Tier 4
□ Does the system have the ability to take actions in external systems? → +1 Tier
□ Is this a multi-agent system? → +1 Tier
□ Could failure result in physical harm? → Tier 4
□ Is this a prohibited use case under EU AI Act? → REJECT
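The checklist above can be sketched as an ordered screening function: hard stops first, then tier floors, then agentic bumps. This is a rapid triage aid under stated assumptions (the field and function names are illustrative, and a real intake would confirm with the full Section 2 scoring model):

```python
from dataclasses import dataclass

@dataclass
class QuickCheck:
    """Yes/no answers to the Section 10 checklist (field names are assumptions)."""
    prohibited_use: bool = False
    physical_harm_possible: bool = False
    systemic_risk_gpai: bool = False
    affects_rights: bool = False
    sensitive_data: bool = False
    annex_iii_domain: bool = False
    autonomous_actions: bool = False
    external_write_access: bool = False
    multi_agent: bool = False

def quick_classify(c: QuickCheck) -> str:
    """Apply the checklist top-down: rejection first, then floors, then +1 modifiers."""
    if c.prohibited_use:
        return "REJECT"
    tier = 1
    if c.physical_harm_possible or c.systemic_risk_gpai:
        tier = 4
    elif c.affects_rights or c.sensitive_data or c.annex_iii_domain or c.autonomous_actions:
        tier = 3
    tier += int(c.external_write_access) + int(c.multi_agent)
    return f"Tier {min(tier, 4)}"

print(quick_classify(QuickCheck(prohibited_use=True)))   # → REJECT
print(quick_classify(QuickCheck(affects_rights=True)))   # → Tier 3
```

The ordering matters: a prohibited use is rejected before any tier is computed, and the +1 modifiers can only raise the result as far as Tier 4.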

Document History

| Version | Date | Author | Changes |
| --- | --- | --- | --- |
| 1.0 | 2025-06-15 | AI Governance Office | Initial release |
| 2.0 | 2026-01-15 | AI Governance Office | Added EU AI Act Annex III triggers, GPAI classification, agentic AI modifiers |

Next Step: Proceed to Artifact 4: AI Intake & Registration Form


CODITECT AI Risk Management Framework

Document ID: AI-RMF-03 | Version: 2.0.0 | Status: Active


AZ1.AI Inc. | CODITECT Platform

Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001


This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework
Last Updated: 2026-01-15
Owner: AZ1.AI Inc. | Lead: Hal Casteel