
AI Governance Operating Model

Enterprise Framework | NIST AI RMF 2.0 & EU AI Act Aligned


Document Control

Field | Details
Document Type | Operating Model
Owner | Enterprise AI Governance Lead (AI Risk Officer)
Executive Sponsor | [CRO / CIO / CDO / COO]
Approvers | AI Governance Board; Legal; CISO; Privacy Officer; Internal Audit
Effective Date | 2026-01-15
Review Cadence | Annual minimum; ad hoc upon regulatory/major risk events
Version | v2.0
Framework Alignment | NIST AI RMF 2.0 (GOVERN, MAP, MEASURE, MANAGE), EU AI Act, ISO/IEC 42001

1. Purpose and Outcomes

1.1 Purpose

Establish a single enterprise operating model that defines how AI is governed across the organization—clarifying accountability, decision rights, processes, controls, and evidence to ensure AI is trusted, compliant, secure, and fit-for-purpose throughout its lifecycle.

1.2 Outcomes (What "Good" Looks Like)

  • Inventory: Every AI system is known, classified, and risk-tiered
  • Gated Assurance: AI initiatives pass standard gates before production use (including vendor AI)
  • Control Efficacy: AI controls are risk-based, repeatable, and auditable
  • Continuous Monitoring: Model behavior is monitored (drift, bias, security, misuse) and incidents are managed
  • Regulatory Readiness: Framework aligns with NIST AI RMF 2.0, EU AI Act, and ISO/IEC 42001
  • Safe Value: Business value is enabled without compromising safety, privacy, fairness, and security

2. Scope and Governance Perimeter

2.1 In Scope (Minimum)

Category | Examples
Internal AI/ML | Trained models, fine-tunes, classifiers, recommenders, forecasting
Generative AI/LLMs | Chatbots, copilots, summarization, content generation, code assistants
Agentic AI | Autonomous agents, multi-agent systems, tool-using AI, workflow automation
Vendor-Embedded AI | SaaS features with AI/automation, decisioning engines
Material Automation | Risk scoring, eligibility, pricing, HR screening, fraud detection
Employee Usage | Corporate use of AI tools (including "shadow AI" detection/containment)
GPAI Models | General-purpose AI models placed on the EU market (per EU AI Act)

2.2 Out of Scope

Exclusion | Rationale | Approval Required
Standard IT automation scripts (deterministic logic) | No ML/AI component | AI Governance Board
Non-AI analytics (basic BI/dashboards) | No algorithmic learning | AI Governance Board
Isolated R&D sandboxes (no production data) | Experimental only | AI Governance Board

2.3 Definition of "AI System" (Operational)

Any system that makes or materially supports a decision, prediction, recommendation, generation, or classification using statistical learning, machine learning, deep learning, LLMs, or similar techniques, including:

  • Vendor tools where the model is not directly accessible
  • Agentic systems that can perceive, reason, plan, and act autonomously
  • Multi-agent orchestrations that coordinate multiple AI components

3. Governance Model: Hybrid Federated

3.1 Model Type

Hybrid (Central Governance + Federated Execution)

Component | Responsibility
Centralized | Enterprise policies, minimum controls, risk-tiering rules, exceptions, auditability, mandatory documentation, regulatory compliance, enterprise reporting
Federated | Domain teams execute delivery and controls locally with domain AI stewards and system/model owners

3.2 Why Hybrid

  • Scales across business units while preserving enterprise consistency
  • Enables fast product delivery with clear risk guardrails
  • Reduces gaps caused by tool/vendor diversity
  • Supports both agile development and regulatory compliance

3.3 Core Principle

Decision rights scale with risk. High/Critical risk AI requires centralized review; Low-risk AI is governed through standardized "policy-as-process" controls.


4. Governance Bodies and Accountability Chain

4.1 AI Governance Board (Strategic Oversight)

Attribute | Details
Mandate | Enterprise policy approval, risk appetite alignment, escalations, critical approvals
Chair | Executive Sponsor (or delegate)
Quorum | Chair + 50% of voting members
Cadence | Quarterly (monthly during initial rollout)

Voting Members:

  • AI Governance Lead (AI Risk Officer)
  • Chief Legal Officer / General Counsel
  • Chief Information Security Officer (CISO)
  • Chief Privacy Officer / Data Protection Officer
  • Chief Data Officer (CDO)
  • Product/Engineering Leadership
  • HR Leadership (for workforce AI)
  • Internal Audit (observer or voting)

Key Decisions:

  • Approve AI governance policies and risk appetite
  • Approve/deny Critical Risk deployments and major exceptions
  • Require remediation plans for systemic risk
  • Set enterprise AI strategy and investment priorities

4.2 AI Risk Review Board (Tactical Gatekeeper)

Attribute | Details
Mandate | Intake triage, risk tiering confirmation, go/no-go at stage gates for Medium–Critical
Chair | AI Governance Lead (or Model Risk Lead)
Quorum | Chair + Legal + Security + Privacy
Cadence | Weekly (or as required by pipeline volume)

Standing Members:

  • Privacy Operations Lead
  • Cybersecurity Architect
  • Data Governance Lead
  • Legal Counsel (Product/IP)
  • Model Risk Management Lead
  • Compliance Officer

Key Decisions:

  • Confirm tiering and required evidence
  • Approve readiness at pre-prod and prod release gates
  • Trigger red teaming, independent validation, or enhanced monitoring
  • Review Algorithmic Impact Assessments

4.3 Domain AI Stewardship (Federated Execution)

Each Business Unit/Domain provides:

  • Domain AI Steward: Governance champion
  • AI System Owners / Model Owners: Technical accountability
  • AI Product Owner: Business accountability
  • MLOps / Platform Owner: Operational controls

Key Responsibilities:

  • Maintain inventory accuracy
  • Execute required controls and provide evidence
  • Ensure monitoring and incident response readiness
  • Escalate issues to Risk Review Board

4.4 AI Ethics Committee (Advisory)

Attribute | Details
Mandate | Ethical review, bias assessment, fairness evaluation, stakeholder representation
Composition | Cross-functional members, including external advisors
Cadence | Ad hoc for High/Critical risk assessments

5. Roles and Decision Rights

5.1 Enterprise-Level Roles

Role | Primary Responsibility
AI Governance Lead (AI Risk Officer) | Owns framework, policy, tiering, reporting, exceptions, and audit readiness
Model/System Owner | Owns performance, safe operation, documentation, and lifecycle compliance
Legal/Compliance | Regulatory interpretation, contract language, prohibited uses, EU AI Act conformity
Security (CISO org) | Threat modeling, prompt injection controls, access control, red teaming, adversarial testing
Privacy (DPO org) | DPIA/PIA, data minimization, consent, cross-border restrictions, GDPR alignment
Data Governance | Data lineage, quality, allowed sources, sensitive data classification, AI-BOM
Internal Audit | Independent assurance, control testing, audit evidence expectations

5.2 Decision Rights by Risk Tier

Risk Tier | Approval Authority | Required Sign-offs | Timeline
Low | Domain AI Steward | Inventory + baseline controls | 1-3 days
Medium | AI Risk Review Board | Evidence pack review | 3-5 days
High | AI Risk Review Board | Security + Privacy + Legal sign-off | 5-10 days
Critical | AI Governance Board | Independent validation + executive approval | 10-20 days
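The tier-to-authority routing above is simple enough to encode directly in an intake workflow. A minimal sketch, assuming tier names are normalized at intake; the mapping values are illustrative labels, not a mandated API:

```python
# Illustrative decision-rights lookup mirroring the table above.
# Authority and sign-off names are examples, not a prescribed schema.
APPROVAL_MATRIX = {
    "low":      {"authority": "Domain AI Steward",    "sign_offs": ["inventory", "baseline-controls"]},
    "medium":   {"authority": "AI Risk Review Board", "sign_offs": ["evidence-pack"]},
    "high":     {"authority": "AI Risk Review Board", "sign_offs": ["security", "privacy", "legal"]},
    "critical": {"authority": "AI Governance Board",  "sign_offs": ["independent-validation", "executive"]},
}

def route_approval(tier: str) -> dict:
    """Return the approval route for a confirmed risk tier; unknown tiers escalate."""
    try:
        return APPROVAL_MATRIX[tier.lower()]
    except KeyError:
        raise ValueError(f"Unknown risk tier {tier!r}; escalate to AI Risk Review Board")

assert route_approval("High")["authority"] == "AI Risk Review Board"
```

Keeping the matrix as data (rather than branching logic) makes it auditable and easy to update when the Governance Board revises decision rights.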

6. Operating Processes (End-to-End Lifecycle)

The operating model runs across 8 lifecycle phases with evidence-based gates, aligned to NIST AI RMF 2.0 functions.

Phase 1: Intake & Registration (MAP)

Trigger: Any new AI use case, vendor AI enablement, model change, or major feature update

Required Outputs:

  • AI inventory entry + owner assignments
  • Intended use statement + user groups impacted
  • Data categories used (including sensitive classes)
  • Model type (GenAI/ML/vendor/agentic)
  • Third-party component inventory (AI-BOM)
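The Phase 1 outputs above can be captured as a single structured inventory record. A minimal sketch, assuming a Python intake service; all field names and sample values are hypothetical, not a mandated schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """Illustrative Phase 1 registration record (field names are assumptions)."""
    system_id: str
    owner: str                      # accountable Model/System Owner
    intended_use: str
    user_groups: list[str]          # user groups impacted
    data_categories: list[str]      # include sensitive classes explicitly
    model_type: str                 # e.g. "genai", "ml", "vendor", "agentic"
    ai_bom: list[str] = field(default_factory=list)  # third-party components

entry = AIInventoryEntry(
    system_id="claims-summarizer-01",
    owner="jane.doe",
    intended_use="Summarize claims correspondence for adjusters",
    user_groups=["claims-adjusters"],
    data_categories=["customer-pii"],
    model_type="genai",
    ai_bom=["third-party-llm-api"],
)
assert entry.owner  # non-negotiable #1: no AI in production without an owner
```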

Phase 2: Classification & Risk Tiering (MAP)

Required Outputs:

  • Risk tier (Low/Medium/High/Critical)
  • Impact assessment (financial, safety, legal, reputational)
  • Autonomy level and human oversight requirement
  • EU AI Act risk category determination
  • GPAI model identification (if applicable)

Phase 3: Risk Assessment Plan (GOVERN → MEASURE)

Required Outputs:

  • Applicable control checklist based on tier
  • Test plan: performance, bias/fairness, robustness, security, privacy
  • Model provenance documentation
  • Third-party risk assessment (for external components)

Phase 4: Build/Configure + Documentation (MEASURE)

Required Outputs (minimum by tier):

  • System Card / Model Card (purpose, limitations, data, evaluation)
  • Threat model (especially GenAI prompt injection + data exfiltration)
  • User-facing transparency disclosures (if applicable)
  • Algorithmic Impact Assessment (High/Critical only)
  • Technical documentation (per EU AI Act Article 11)

Phase 5: Pre-Production Readiness Gate (MANAGE)

Gate Owner: AI Risk Review Board (or Domain for Low Risk)

Required Outputs:

  • Evidence pack complete
  • Monitoring plan and alert thresholds defined
  • Rollback plan and kill switch (as required)
  • Incident runbook linkage (for High/Critical)
  • Human oversight mechanisms verified
  • Conformity assessment (High-Risk EU AI Act systems)

Phase 6: Production Release + Change Control (MANAGE)

Required Outputs:

  • Deployment approval record
  • Change ticket & versioning in model registry
  • Access controls and least privilege
  • EU database registration (if required)
  • Downstream provider notification (GPAI)

Phase 7: Monitoring & Ongoing Oversight (MANAGE)

Required Outputs:

  • Drift/bias monitoring (continuous)
  • Output logging/traceability for GenAI sessions (tier-based retention)
  • Periodic revalidation schedule
  • Serious incident tracking and reporting
  • Post-market monitoring (EU AI Act)

Phase 8: Decommissioning (GOVERN → MANAGE)

Required Outputs:

  • Decommission approval
  • Data retention/secure deletion verification
  • Archive of evidence for audit (retain per retention schedule)
  • Notification to downstream providers

7. Control Enforcement Mechanisms

7.1 Policy-as-Process (Preferred)

Controls are embedded in workflows/tools so compliance is default:

  • GRC intake workflows for registration/tiering
  • CI/CD gates for high-risk production pushes
  • Model registry versioning requirements
  • Central logging/SIEM integration for production AI
  • Automated guardrails on GenAI inputs and outputs
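A CI/CD release gate of this kind can be sketched as an evidence check keyed on risk tier. The tier names follow Section 5; the evidence labels are illustrative assumptions, not a prescribed pipeline API:

```python
def release_gate(tier: str, evidence: set[str]) -> bool:
    """Block a production push unless tier-required evidence is present (sketch)."""
    required = {
        "low":      {"inventory-entry"},
        "medium":   {"inventory-entry", "evidence-pack"},
        "high":     {"inventory-entry", "evidence-pack", "threat-model", "monitoring-plan"},
        "critical": {"inventory-entry", "evidence-pack", "threat-model",
                     "monitoring-plan", "independent-validation"},
    }[tier]
    missing = required - evidence
    if missing:
        print(f"BLOCKED: missing {sorted(missing)}")  # surfaced in the pipeline log
        return False
    return True

assert release_gate("low", {"inventory-entry"}) is True
```

In practice this check would run as a pipeline step that reads evidence status from the GRC system, making compliance the default rather than a manual review.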

7.2 Exception Handling

Any deviation from required controls must be documented via Policy Exception Workflow:

  • Duration: Maximum 90 days, renewable with justification
  • Compensating controls: Required
  • Named risk owner: Required
  • Formal approval level: Based on tier
  • Tracking: All exceptions logged and reported quarterly
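The 90-day limit above lends itself to an automated expiry check in the exception tracker. A minimal sketch, assuming exceptions carry a grant date and a renewal flag (helper name and statuses are illustrative):

```python
from datetime import date

MAX_EXCEPTION_DAYS = 90  # policy maximum before renewal is required

def exception_status(granted: date, today: date, renewed: bool = False) -> str:
    """Classify a policy exception against the 90-day limit (illustrative helper)."""
    age = (today - granted).days
    if age <= MAX_EXCEPTION_DAYS:
        return "active"
    return "renewed" if renewed else "overdue"

assert exception_status(date(2026, 1, 1), date(2026, 2, 1)) == "active"
assert exception_status(date(2026, 1, 1), date(2026, 5, 1)) == "overdue"
```

Overdue exceptions would feed directly into the quarterly exception report and the "0 overdue" executive metric.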

8. Human Oversight and Override Procedures

This section establishes explicit procedures for human oversight, intervention, and override of AI systems, ensuring compliance with EU AI Act Article 14 and ISO/IEC 42001 Annex A.10.5.

8.1 Human Oversight Principles

All AI systems must operate under appropriate human oversight proportional to their risk tier:

Risk Tier | Oversight Level | Override Authority
Low | Passive monitoring with escalation triggers | System Owner
Medium | Active review of flagged outputs | Domain AI Steward
High | Human-in-the-loop for critical decisions | AI Risk Review Board member
Critical | Human-on-the-loop with real-time intervention capability | Designated Senior Officer

8.2 Override Authority Matrix

Override Type | Authority Level | Approval Required | Max Response Time
Routine Override | System Owner | Self-approval with documentation | 4 hours
Output Rejection | Model Owner or Steward | Self-approval with logging | Immediate
System Pause | Domain AI Steward | Notify AI Risk Officer within 1 hour | Immediate
Emergency Shutdown | Any authorized operator | Notify Security + AI Risk Officer | Immediate
Kill Switch Activation | AI Risk Officer or CISO | Post-hoc ratification within 24 hours | Immediate

8.3 Override Procedures by Scenario

8.3.1 Output Override (Reject AI Decision)

When: AI output is incorrect, inappropriate, biased, or potentially harmful

Procedure:

  1. Operator marks output as rejected in system interface
  2. System logs rejection with reason code and operator ID
  3. Alternative decision is documented (human or fallback)
  4. Notification sent to Model Owner if pattern threshold exceeded (>3 rejections/day)
  5. Weekly aggregation reviewed by Domain AI Steward
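Steps 2 and 4 of the procedure above can be sketched as a rejection log that flags the Model Owner once the daily pattern threshold is crossed. Class and method names are assumptions for illustration:

```python
from collections import Counter
from datetime import date

REJECTION_THRESHOLD = 3  # >3 rejections/day triggers Model Owner notification

class OverrideLog:
    """Log output rejections and flag when the daily threshold is exceeded (sketch)."""

    def __init__(self) -> None:
        self._daily: Counter = Counter()

    def reject(self, system_id: str, operator: str, reason_code: str, day: date) -> bool:
        """Record one rejection; return True when the Model Owner should be notified."""
        self._daily[(system_id, day)] += 1
        return self._daily[(system_id, day)] > REJECTION_THRESHOLD

log = OverrideLog()
d = date(2026, 1, 15)
flags = [log.reject("sys-1", "op-a", "R01", d) for _ in range(4)]
assert flags == [False, False, False, True]  # fourth rejection crosses the threshold
```

A production implementation would also persist operator ID and reason code per Section 8.6, which this sketch omits.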

8.3.2 System Pause (Temporary Suspension)

When: Suspected malfunction, drift detected, or pending investigation

Procedure:

  1. Authorized operator initiates pause via designated control interface
  2. System enters safe state (queue inputs, stop outputs, maintain state)
  3. Automated notification to System Owner, Domain AI Steward, AI Risk Officer
  4. Investigation initiated within 4 hours
  5. Resume requires documented resolution and steward approval

8.3.3 Emergency Shutdown (Kill Switch)

When: Active harm, security breach, regulatory order, or safety-critical failure

Procedure:

  1. Any authorized operator activates kill switch (physical or software)
  2. System immediately ceases all operations
  3. Automated alerts to: Security Operations, AI Risk Officer, System Owner, Legal
  4. Incident ticket auto-created with severity "Critical"
  5. Post-incident review required within 48 hours
  6. Restart requires AI Risk Review Board approval (High/Critical) or AI Risk Officer approval (Medium)

8.4 Kill Switch Requirements

All High and Critical risk AI systems must implement:

Requirement | Specification
Accessibility | Kill switch accessible within 3 clicks/commands from any operator interface
Independence | Kill switch operates independently of the AI system (cannot be overridden by the AI)
Redundancy | Minimum 2 independent kill mechanisms (software + infrastructure)
Testing | Kill switch tested quarterly (documented in evidence pack)
Response Time | System must halt within 30 seconds of activation
State Preservation | System state preserved for forensic analysis
Notification | Automated notification to predefined distribution list
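Three of the requirements above (halt, state preservation, notification) can be sketched in a few lines; the redundancy and 30-second-halt requirements are infrastructure concerns that this software-side sketch does not cover. All names are assumptions, not a mandated design:

```python
import time

class KillSwitch:
    """Illustrative kill-switch sketch: halt flag, forensic state snapshot,
    and automated notification. Names are assumptions, not a mandated design."""

    def __init__(self, notify_list: list[str]):
        self.notify_list = notify_list
        self.halted = False
        self.notified: list[str] = []

    def activate(self, operator: str, system_state: dict) -> dict:
        self.halted = True                      # halt must complete within 30 seconds
        snapshot = {                            # preserve state for forensic analysis
            "operator": operator,
            "activated_at": time.time(),
            "state": dict(system_state),
        }
        self.notified = list(self.notify_list)  # stand-in for real automated alerting
        return snapshot

ks = KillSwitch(["security-ops", "ai-risk-officer"])
snap = ks.activate("op-42", {"queue_depth": 7})
assert ks.halted and "security-ops" in ks.notified
```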

8.5 Agentic AI Override Controls

For autonomous and agentic AI systems, additional override mechanisms are required:

Control | Requirement
Action Boundaries | All permitted actions explicitly enumerated; unauthorized actions blocked
Confirmation Gates | High-impact actions require human confirmation before execution
Rollback Capability | All actions must be reversible within a defined time window
Activity Logging | Complete audit trail of all actions with timestamps
Resource Limits | Hard limits on computational resources, API calls, and execution time
Watchdog Timer | Automatic pause if no human interaction within a configurable threshold
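The first two controls above (action boundaries and confirmation gates) amount to an allow-list check in front of every agent action. A minimal sketch; the action names and returned statuses are illustrative assumptions:

```python
PERMITTED_ACTIONS = {"read_record", "draft_reply"}   # explicit allow-list
CONFIRMATION_REQUIRED = {"send_payment"}             # high-impact actions

def authorize(action: str, human_confirmed: bool = False) -> str:
    """Enforce action boundaries and confirmation gates for an agent (sketch)."""
    if action in CONFIRMATION_REQUIRED:
        return "execute" if human_confirmed else "await-confirmation"
    if action in PERMITTED_ACTIONS:
        return "execute"
    return "blocked"  # unauthorized actions are blocked (and would be logged)

assert authorize("read_record") == "execute"
assert authorize("send_payment") == "await-confirmation"
assert authorize("delete_database") == "blocked"
```

The key design choice is deny-by-default: anything not explicitly enumerated is blocked, which is what "action boundaries" requires for agentic systems.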

8.6 Override Documentation Requirements

All overrides must be documented with:

Field | Required Information
Timestamp | Date/time of override (UTC)
Operator | Identity of the person initiating the override
System | AI system identifier and version
Override Type | Category (output rejection, pause, shutdown, kill switch)
Reason Code | Standardized reason from the approved taxonomy
Reason Detail | Free-text description of circumstances
Impact | Affected users, transactions, or decisions
Resolution | How the situation was resolved
Follow-up | Required actions and responsible parties

8.7 Override Audit and Review

Review Type | Frequency | Owner | Output
Override Log Review | Weekly | Domain AI Steward | Pattern identification, training needs
Kill Switch Testing | Quarterly | System Owner | Test documentation, remediation if failed
Override Trend Analysis | Monthly | AI Risk Officer | Dashboard metrics, escalation if threshold exceeded
Audit Sample Review | Semi-annually | Internal Audit | Compliance verification, evidence quality

8.8 Training Requirements

All personnel with override authority must complete:

Training | Frequency | Verification
Human Oversight Fundamentals | Annual | Certification quiz
System-Specific Override Procedures | Before access is granted | Hands-on demonstration
Kill Switch Drill | Quarterly | Participation record
Incident Response for AI | Annual | Tabletop exercise

9. Tooling and Evidence Repositories

Component | Repository / Tool
AI Inventory / Use Case Registry | [GRC system / CMDB / Custom Portal]
Model Registry | [MLflow / Cloud Registry / Internal]
Documentation Repository | [Confluence / SharePoint] (controlled access)
Ticketing | [ServiceNow / Jira]
Monitoring | [Observability Platform + Model Monitoring]
Security Telemetry | [SIEM]
Audit Vault | [Secure Evidence Repository]
Guardrails Platform | [Input/Output filtering system]

10. Cadence, Reporting, and Metrics

10.1 Governance Cadence

Body | Frequency
AI Governance Board | Quarterly (monthly during rollout)
AI Risk Review Board | Weekly
Domain Steward Forum | Bi-weekly
Inventory Reconciliation | Monthly
Policy Refresh | Semi-annual minimum

10.2 Key Metrics (Executive Dashboard)

Metric | Target
% AI Systems Inventoried | 100%
% Systems with completed tiering + documentation | >95%
Time-to-approve (Low Risk) | <5 days
Time-to-approve (High/Critical) | <20 days
Policy exceptions (open / overdue) | <5 open, 0 overdue
Monitoring alert time-to-remediate | <24 hours
Incident rate (severity-weighted) | Trending down
EU AI Act compliance readiness | 100% for applicable systems
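The inventory-coverage metric above depends on comparing registrations against systems discovered through other channels (e.g. shadow-AI scans). A minimal sketch, with a hypothetical helper name:

```python
def inventory_coverage(registered: int, discovered: int) -> float:
    """Percent of discovered AI systems that are registered (target: 100%).
    An empty estate counts as fully covered by convention."""
    return 100.0 * registered / discovered if discovered else 100.0

assert inventory_coverage(47, 50) == 94.0
```

Anything below 100% implies unregistered systems, which is itself an escalation trigger under Section 11 ("shadow AI").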

11. Escalation Paths

Escalate immediately to AI Risk Review Board (and/or Governance Board) if:

  • AI system produces harmful, discriminatory, or unsafe outcomes
  • Suspected data leakage / prompt injection / model extraction
  • High-severity drift or unexplained performance degradation
  • Regulatory complaint or legal hold related to AI outputs
  • Shadow AI usage involves sensitive data
  • Agentic AI takes unauthorized actions
  • Multi-agent system exhibits unexpected emergent behavior
  • Serious incident requiring EU AI Office notification

12. Operating Model "Minimum Non-Negotiables"

These rules apply enterprise-wide:

  1. No AI in production without an owner
  2. No AI without inventory registration
  3. Risk tier determines controls (no "one-size-fits-all")
  4. High/Critical systems require formal review gates
  5. Monitoring + rollback plan required for production AI
  6. Exceptions are time-bound and approved
  7. Human oversight required for all High/Critical AI
  8. Agentic AI requires explicit action boundaries
  9. All GPAI models require technical documentation
  10. Prohibited AI practices are never permitted
  11. Serious incidents reported within 24 hours

13. Regulatory Alignment Matrix

NIST AI RMF 2.0 Function | Operating Model Component
GOVERN | Charter, Policy, Governance Bodies, Roles
MAP | Intake, Classification, Risk Tiering, AI-BOM
MEASURE | Risk Assessment, Documentation, Testing, Evaluation
MANAGE | Gates, Deployment, Monitoring, Incident Response

EU AI Act Requirement | Operating Model Component
Risk Classification | Phase 2: Classification & Risk Tiering
Prohibited Practices | Policy Artifact, Red Lines
High-Risk Conformity | Phase 5: Pre-Production Gate
GPAI Obligations | Phase 4: Documentation; Phase 6: Downstream Notification
Human Oversight (Art. 14) | Section 8: Human Oversight and Override Procedures
Transparency | System Card, User Disclosures
Post-Market Monitoring | Phase 7: Monitoring & Oversight

ISO/IEC 42001 Requirement | Operating Model Component
AI Policy | Enterprise AI Policy (Artifact 5)
AI Objectives | Purpose & Outcomes (Section 1)
Risk Assessment | Phases 2-3
Documented Information | System Cards, Evidence Repositories
Monitoring & Measurement | Phase 7, Metrics
Continual Improvement | Review Cadence, Exception Handling
Human Oversight (A.10.5) | Section 8: Human Oversight and Override Procedures

Document History

Version | Date | Author | Changes
1.0 | 2025-06-15 | AI Governance Office | Initial release
2.0 | 2026-01-15 | AI Governance Office | Updated for NIST AI RMF 2.0 and the EU AI Act (August 2025); added agentic AI controls

Next Step: Proceed to Artifact 2: AI Risk Governance Charter (Formal mandate, authority, scope, and accountability)


CODITECT AI Risk Management Framework

Document ID: AI-RMF-01 | Version: 2.0.0 | Status: Active


AZ1.AI Inc. | CODITECT Platform

Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001


This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework
Last Updated: 2026-01-15
Owner: AZ1.AI Inc. | Lead: Hal Casteel