AI Governance Operating Model
Enterprise Framework | NIST AI RMF 2.0 & EU AI Act Aligned
Document Control
| Field | Details |
|---|---|
| Document Type | Operating Model |
| Owner | Enterprise AI Governance Lead (AI Risk Officer) |
| Executive Sponsor | [CRO / CIO / CDO / COO] |
| Approvers | AI Governance Board; Legal; CISO; Privacy Officer; Internal Audit |
| Effective Date | 2026-01-15 |
| Review Cadence | Annual minimum; ad hoc upon regulatory/major risk events |
| Version | v2.0 |
| Framework Alignment | NIST AI RMF 2.0 (GOVERN, MAP, MEASURE, MANAGE), EU AI Act, ISO/IEC 42001 |
1. Purpose and Outcomes
1.1 Purpose
Establish a single enterprise operating model that defines how AI is governed across the organization—clarifying accountability, decision rights, processes, controls, and evidence to ensure AI is trusted, compliant, secure, and fit-for-purpose throughout its lifecycle.
1.2 Outcomes (What "Good" Looks Like)
- Inventory: Every AI system is known, classified, and risk-tiered
- Gated Assurance: AI initiatives pass standard gates before production use (including vendor AI)
- Control Efficacy: AI controls are risk-based, repeatable, and auditable
- Continuous Monitoring: Model behavior is monitored (drift, bias, security, misuse) and incidents are managed
- Regulatory Readiness: Framework aligns with NIST AI RMF 2.0, EU AI Act, and ISO/IEC 42001
- Safe Value: Business value is enabled without compromising safety, privacy, fairness, or security
2. Scope and Governance Perimeter
2.1 In Scope (Minimum)
| Category | Examples |
|---|---|
| Internal AI/ML | Trained models, fine-tunes, classifiers, recommenders, forecasting |
| Generative AI/LLMs | Chatbots, copilots, summarization, content generation, code assistants |
| Agentic AI | Autonomous agents, multi-agent systems, tool-using AI, workflow automation |
| Vendor-Embedded AI | SaaS features with AI/automation, decisioning engines |
| Material Automation | Risk scoring, eligibility, pricing, HR screening, fraud detection |
| Employee Usage | Corporate use of AI tools (including "shadow AI" detection/containment) |
| GPAI Models | General-purpose AI models placed on EU market (per EU AI Act) |
2.2 Out of Scope
| Exclusion | Rationale | Approval Required |
|---|---|---|
| Standard IT automation scripts (deterministic logic) | No ML/AI component | AI Governance Board |
| Non-AI analytics (basic BI/dashboards) | No algorithmic learning | AI Governance Board |
| Isolated R&D sandboxes (no production data) | Experimental only | AI Governance Board |
2.3 Definition of "AI System" (Operational)
Any system that makes or materially supports a decision, prediction, recommendation, generation, or classification using statistical learning, machine learning, deep learning, LLMs, or similar techniques, including:
- Vendor tools where the model is not directly accessible
- Agentic systems that can perceive, reason, plan, and act autonomously
- Multi-agent orchestrations that coordinate multiple AI components
3. Governance Model: Hybrid Federated
3.1 Model Type
Hybrid (Central Governance + Federated Execution)
| Component | Responsibility |
|---|---|
| Centralized | Enterprise policies, minimum controls, risk-tiering rules, exceptions, auditability, mandatory documentation, regulatory compliance, enterprise reporting |
| Federated | Domain teams execute delivery and controls locally with domain AI stewards and system/model owners |
3.2 Why Hybrid
- Scales across business units while preserving enterprise consistency
- Enables fast product delivery with clear risk guardrails
- Reduces gaps caused by tool/vendor diversity
- Supports both agile development and regulatory compliance
3.3 Core Principle
Decision rights scale with risk. High/Critical risk AI requires centralized review; Low-risk AI is governed through standardized "policy-as-process" controls.
4. Governance Bodies and Accountability Chain
4.1 AI Governance Board (Strategic Oversight)
| Attribute | Details |
|---|---|
| Mandate | Enterprise policy approval, risk appetite alignment, escalations, critical approvals |
| Chair | Executive Sponsor (or delegate) |
| Quorum | Chair + 50% of voting members |
| Cadence | Quarterly (monthly during initial rollout) |
Voting Members:
- AI Governance Lead (AI Risk Officer)
- Chief Legal Officer / General Counsel
- Chief Information Security Officer (CISO)
- Chief Privacy Officer / Data Protection Officer
- Chief Data Officer (CDO)
- Product/Engineering Leadership
- HR Leadership (for workforce AI)
- Internal Audit (observer or voting)
Key Decisions:
- Approve AI governance policies and risk appetite
- Approve/deny Critical Risk deployments and major exceptions
- Require remediation plans for systemic risk
- Set enterprise AI strategy and investment priorities
4.2 AI Risk Review Board (Tactical Gatekeeper)
| Attribute | Details |
|---|---|
| Mandate | Intake triage, risk tiering confirmation, go/no-go at stage gates for Medium–Critical |
| Chair | AI Governance Lead (or Model Risk Lead) |
| Quorum | Chair + Legal + Security + Privacy |
| Cadence | Weekly (or as required by pipeline volume) |
Standing Members:
- Privacy Operations Lead
- Cybersecurity Architect
- Data Governance Lead
- Legal Counsel (Product/IP)
- Model Risk Management Lead
- Compliance Officer
Key Decisions:
- Confirm tiering and required evidence
- Approve readiness at pre-prod and prod release gates
- Trigger red teaming, independent validation, or enhanced monitoring
- Review Algorithmic Impact Assessments
4.3 Domain AI Stewardship (Federated Execution)
Each Business Unit/Domain provides:
- Domain AI Steward: Governance champion
- AI System Owners / Model Owners: Technical accountability
- AI Product Owner: Business accountability
- MLOps / Platform Owner: Operational controls
Key Responsibilities:
- Maintain inventory accuracy
- Execute required controls and provide evidence
- Ensure monitoring and incident response readiness
- Escalate issues to Risk Review Board
4.4 AI Ethics Committee (Advisory)
| Attribute | Details |
|---|---|
| Mandate | Ethical review, bias assessment, fairness evaluation, stakeholder representation |
| Composition | Cross-functional members including external advisors |
| Cadence | Ad hoc for High/Critical risk assessments |
5. Roles and Decision Rights
5.1 Enterprise-Level Roles
| Role | Primary Responsibility |
|---|---|
| AI Governance Lead (AI Risk Officer) | Owns framework, policy, tiering, reporting, exceptions, and audit readiness |
| Model/System Owner | Owns performance, safe operation, documentation, and lifecycle compliance |
| Legal/Compliance | Regulatory interpretation, contract language, prohibited uses, EU AI Act conformity |
| Security (CISO org) | Threat modeling, prompt injection controls, access control, red teaming, adversarial testing |
| Privacy (DPO org) | DPIA/PIA, data minimization, consent, cross-border restrictions, GDPR alignment |
| Data Governance | Data lineage, quality, allowed sources, sensitive data classification, AI-BOM |
| Internal Audit | Independent assurance, control testing, audit evidence expectations |
5.2 Decision Rights by Risk Tier
| Risk Tier | Approval Authority | Required Sign-offs | Timeline |
|---|---|---|---|
| Low | Domain AI Steward | Inventory + Baseline controls | 1-3 days |
| Medium | AI Risk Review Board | Evidence pack review | 3-5 days |
| High | AI Risk Review Board | Security + Privacy + Legal sign-off | 5-10 days |
| Critical | AI Governance Board | Independent validation + Executive approval | 10-20 days |
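The decision-rights table above can be expressed as a routing rule. The sketch below is illustrative only: the enum, mapping, and function names (`RiskTier`, `APPROVAL_MATRIX`, `route_for_approval`) are hypothetical, not mandated by this operating model.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "Low"
    MEDIUM = "Medium"
    HIGH = "High"
    CRITICAL = "Critical"

# Hypothetical encoding of the decision-rights table:
# tier -> (approval authority, required sign-offs, max timeline in days)
APPROVAL_MATRIX = {
    RiskTier.LOW: ("Domain AI Steward", ["Inventory", "Baseline controls"], 3),
    RiskTier.MEDIUM: ("AI Risk Review Board", ["Evidence pack review"], 5),
    RiskTier.HIGH: ("AI Risk Review Board", ["Security", "Privacy", "Legal"], 10),
    RiskTier.CRITICAL: ("AI Governance Board",
                        ["Independent validation", "Executive approval"], 20),
}

def route_for_approval(tier: RiskTier) -> dict:
    """Return the approval authority, sign-offs, and max timeline for a tier."""
    authority, sign_offs, max_days = APPROVAL_MATRIX[tier]
    return {"authority": authority, "sign_offs": sign_offs, "max_days": max_days}
```

Embedding the matrix in intake tooling (rather than documentation alone) supports the "policy-as-process" principle described in Section 7.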
6. Operating Processes (End-to-End Lifecycle)
The operating model runs across 8 lifecycle phases with evidence-based gates, aligned to NIST AI RMF 2.0 functions.
Phase 1: Intake & Registration (MAP)
Trigger: Any new AI use case, vendor AI enablement, model change, or major feature update
Required Outputs:
- AI inventory entry + owner assignments
- Intended use statement + user groups impacted
- Data categories used (including sensitive classes)
- Model type (GenAI/ML/vendor/agentic)
- Third-party component inventory (AI-BOM)
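The Phase 1 outputs above map naturally onto a structured registration record. The field names below are an illustrative sketch, not a mandated schema; a real implementation would live in the GRC system or inventory portal.

```python
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    """Illustrative Phase 1 registration record; field names are assumptions."""
    system_id: str
    owner: str
    intended_use: str
    user_groups: list        # user groups impacted
    data_categories: list    # including sensitive classes
    model_type: str          # e.g. "GenAI", "ML", "vendor", "agentic"
    ai_bom: list = field(default_factory=list)  # third-party component inventory

    def is_complete(self) -> bool:
        # All mandatory Phase 1 outputs must be present before tiering (Phase 2).
        return all([self.system_id, self.owner, self.intended_use,
                    self.user_groups, self.data_categories, self.model_type])
```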
Phase 2: Classification & Risk Tiering (MAP)
Required Outputs:
- Risk tier (Low/Medium/High/Critical)
- Impact assessment (financial, safety, legal, reputational)
- Autonomy level and human oversight requirement
- EU AI Act risk category determination
- GPAI model identification (if applicable)
Phase 3: Risk Assessment Plan (GOVERN → MEASURE)
Required Outputs:
- Applicable control checklist based on tier
- Test plan: performance, bias/fairness, robustness, security, privacy
- Model provenance documentation
- Third-party risk assessment (for external components)
Phase 4: Build/Configure + Documentation (MEASURE)
Required Outputs (minimum by tier):
- System Card / Model Card (purpose, limitations, data, evaluation)
- Threat model (especially GenAI prompt injection + data exfiltration)
- User-facing transparency disclosures (if applicable)
- Algorithmic Impact Assessment (High/Critical only)
- Technical documentation (per EU AI Act Article 11)
Phase 5: Pre-Production Readiness Gate (MANAGE)
Gate Owner: AI Risk Review Board (or Domain for Low Risk)
Required Outputs:
- Evidence pack complete
- Monitoring plan and alert thresholds defined
- Rollback plan and kill switch (as required)
- Incident runbook linkage (for High/Critical)
- Human oversight mechanisms verified
- Conformity assessment (High-Risk EU AI Act systems)
Phase 6: Production Release + Change Control (MANAGE)
Required Outputs:
- Deployment approval record
- Change ticket & versioning in model registry
- Access controls and least privilege
- EU database registration (if required)
- Downstream provider notification (GPAI)
Phase 7: Monitoring & Ongoing Oversight (MANAGE)
Required Outputs:
- Drift/bias monitoring (continuous)
- Output logging/traceability for GenAI sessions (tier-based retention)
- Periodic revalidation schedule
- Serious incident tracking and reporting
- Post-market monitoring (EU AI Act)
Phase 8: Decommissioning (GOVERN → MANAGE)
Required Outputs:
- Decommission approval
- Data retention/secure deletion verification
- Archive of evidence for audit (retain per retention schedule)
- Notification to downstream providers
7. Control Enforcement Mechanisms
7.1 Policy-as-Process (Preferred)
Controls are embedded in workflows and tools so that compliance is the default:
- GRC intake workflows for registration/tiering
- CI/CD gates for high-risk production pushes
- Model registry versioning requirements
- Central logging/SIEM integration for production AI
- Automated guardrails for GenAI inputs/outputs
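A CI/CD gate of the kind listed above can be a small check that blocks a production push when required evidence is missing. This is a minimal sketch under assumed names (`REQUIRED_EVIDENCE`, `gate_check`, and the evidence item labels are all hypothetical).

```python
# Hypothetical evidence requirements per risk tier for a production release gate.
REQUIRED_EVIDENCE = {
    "High": ["system_card", "threat_model", "monitoring_plan", "rollback_plan"],
    "Critical": ["system_card", "threat_model", "monitoring_plan",
                 "rollback_plan", "independent_validation"],
}

def gate_check(tier: str, evidence: set) -> tuple[bool, list]:
    """Return (passed, missing_items) for a release candidate.

    Tiers with no entry (e.g. Low) pass the central gate and rely on
    standardized baseline controls instead.
    """
    required = REQUIRED_EVIDENCE.get(tier, [])
    missing = [item for item in required if item not in evidence]
    return (not missing, missing)
```

Wired into the pipeline, a failed check would fail the build and emit the missing items into the change ticket.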
7.2 Exception Handling
Any deviation from required controls must be documented via Policy Exception Workflow:
- Duration: Maximum 90 days, renewable with justification
- Compensating controls: Required
- Named risk owner: Required
- Formal approval level: Based on tier
- Tracking: All exceptions logged and reported quarterly
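The 90-day limit and quarterly reporting lend themselves to an automated status check. The sketch below assumes a simple three-state classification (`active` / `expiring` / `overdue`); the 14-day warning window is an illustrative choice, not a requirement of this policy.

```python
from datetime import date, timedelta

MAX_EXCEPTION_DAYS = 90  # per the Policy Exception Workflow above

def exception_status(granted: date, today: date) -> str:
    """Classify an exception as 'active', 'expiring' (<=14 days left), or 'overdue'."""
    expiry = granted + timedelta(days=MAX_EXCEPTION_DAYS)
    if today > expiry:
        return "overdue"          # feeds the "0 overdue" target in Section 10
    if (expiry - today).days <= 14:
        return "expiring"         # prompt the risk owner to renew or remediate
    return "active"
```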
8. Human Oversight and Override Procedures
This section establishes explicit procedures for human oversight, intervention, and override of AI systems, ensuring compliance with EU AI Act Article 14 and ISO/IEC 42001 Annex A.10.5.
8.1 Human Oversight Principles
All AI systems must operate under appropriate human oversight proportional to their risk tier:
| Risk Tier | Oversight Level | Override Authority |
|---|---|---|
| Low | Passive monitoring with escalation triggers | System Owner |
| Medium | Active review of flagged outputs | Domain AI Steward |
| High | Human-in-the-loop for critical decisions | AI Risk Review Board member |
| Critical | Human-on-the-loop with real-time intervention capability | Designated Senior Officer |
8.2 Override Authority Matrix
| Override Type | Authority Level | Approval Required | Max Response Time |
|---|---|---|---|
| Routine Override | System Owner | Self-approval with documentation | 4 hours |
| Output Rejection | Model Owner or Steward | Self-approval with logging | Immediate |
| System Pause | Domain AI Steward | Notify AI Risk Officer within 1 hour | Immediate |
| Emergency Shutdown | Any authorized operator | Notify Security + AI Risk Officer | Immediate |
| Kill Switch Activation | AI Risk Officer or CISO | Post-hoc ratification within 24 hours | Immediate |
8.3 Override Procedures by Scenario
8.3.1 Output Override (Reject AI Decision)
When: AI output is incorrect, inappropriate, biased, or potentially harmful
Procedure:
- Operator marks output as rejected in system interface
- System logs rejection with reason code and operator ID
- Alternative decision is documented (human or fallback)
- Notification sent to Model Owner if pattern threshold exceeded (>3 rejections/day)
- Weekly aggregation reviewed by Domain AI Steward
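The rejection-logging and pattern-threshold steps above can be sketched as a small logger. The class and method names are assumptions; in production the entry would also be written to the audit vault and the notification routed to the Model Owner.

```python
from collections import defaultdict
from datetime import datetime, timezone

PATTERN_THRESHOLD = 3  # rejections/day before the Model Owner is notified

class RejectionLog:
    """Illustrative rejection logger implementing the daily pattern threshold."""

    def __init__(self):
        self._counts = defaultdict(int)  # (system_id, date) -> rejection count

    def record(self, system_id: str, operator_id: str, reason_code: str,
               when: datetime = None) -> bool:
        """Log one rejection; return True once the daily threshold is exceeded."""
        when = when or datetime.now(timezone.utc)
        key = (system_id, when.date())
        self._counts[key] += 1
        # A real implementation would persist operator_id and reason_code
        # to the evidence repository for the weekly steward review.
        return self._counts[key] > PATTERN_THRESHOLD
```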
8.3.2 System Pause (Temporary Suspension)
When: Suspected malfunction, drift detected, or pending investigation
Procedure:
- Authorized operator initiates pause via designated control interface
- System enters safe state (queue inputs, stop outputs, maintain state)
- Automated notification to System Owner, Domain AI Steward, AI Risk Officer
- Investigation initiated within 4 hours
- Resume requires documented resolution and steward approval
8.3.3 Emergency Shutdown (Kill Switch)
When: Active harm, security breach, regulatory order, or safety-critical failure
Procedure:
- Any authorized operator activates kill switch (physical or software)
- System immediately ceases all operations
- Automated alerts to: Security Operations, AI Risk Officer, System Owner, Legal
- Incident ticket auto-created with severity "Critical"
- Post-incident review required within 48 hours
- Restart requires AI Risk Review Board approval (High/Critical) or AI Risk Officer approval (Medium)
8.4 Kill Switch Requirements
All High and Critical risk AI systems must implement:
| Requirement | Specification |
|---|---|
| Accessibility | Kill switch accessible within 3 clicks/commands from any operator interface |
| Independence | Kill switch operates independently of AI system (cannot be overridden by AI) |
| Redundancy | Minimum 2 independent kill mechanisms (software + infrastructure) |
| Testing | Kill switch tested quarterly (documented in evidence pack) |
| Response Time | System must halt within 30 seconds of activation |
| State Preservation | System state preserved for forensic analysis |
| Notification | Automated notification to predefined distribution list |
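The independence and notification requirements above can be illustrated with a switch that lives outside the model-serving process and flips a flag the serving layer polls. This is a sketch only: `notify` and `snapshot_state` are hypothetical callables standing in for real alerting and forensics hooks.

```python
import time

class KillSwitch:
    """Minimal kill-switch sketch. Independence means this object runs in a
    separate control plane the AI system cannot override; the serving layer
    polls `active` and must halt within HALT_DEADLINE_S of it becoming True."""

    HALT_DEADLINE_S = 30  # per the response-time requirement above

    def __init__(self, notify, snapshot_state):
        self.active = False
        self._notify = notify            # alert the predefined distribution list
        self._snapshot = snapshot_state  # preserve state for forensic analysis

    def activate(self, operator_id: str, reason: str) -> dict:
        self.active = True
        record = {"operator": operator_id, "reason": reason,
                  "activated_at": time.time()}
        self._snapshot()      # preserve state before anything is torn down
        self._notify(record)  # automated notification requirement
        return record
```

Pairing a software switch like this with an infrastructure-level mechanism (e.g. revoking network access) satisfies the two-mechanism redundancy requirement.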
8.5 Agentic AI Override Controls
For autonomous and agentic AI systems, additional override mechanisms are required:
| Control | Requirement |
|---|---|
| Action Boundaries | All permitted actions explicitly enumerated; unauthorized actions blocked |
| Confirmation Gates | High-impact actions require human confirmation before execution |
| Rollback Capability | All actions must be reversible within defined time window |
| Activity Logging | Complete audit trail of all actions with timestamps |
| Resource Limits | Hard limits on computational resources, API calls, and execution time |
| Watchdog Timer | Automatic pause if no human interaction within configurable threshold |
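The action-boundary and confirmation-gate controls above amount to allowlist enforcement with a human gate on high-impact actions. A minimal sketch, with hypothetical action names:

```python
class ActionBoundary:
    """Illustrative allowlist enforcement for an agentic system: permitted
    actions are explicitly enumerated, high-impact actions require human
    confirmation, and anything not listed is blocked outright."""

    def __init__(self, allowed: set, needs_confirmation: set):
        self.allowed = allowed
        self.needs_confirmation = needs_confirmation

    def authorize(self, action: str, human_confirmed: bool = False) -> str:
        if action not in self.allowed:
            return "blocked"               # unauthorized actions are blocked
        if action in self.needs_confirmation and not human_confirmed:
            return "pending_confirmation"  # confirmation gate before execution
        return "permitted"
```

Every `authorize` call would also be written to the activity log to satisfy the audit-trail control.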
8.6 Override Documentation Requirements
All overrides must be documented with:
| Field | Required Information |
|---|---|
| Timestamp | Date/time of override (UTC) |
| Operator | Identity of person initiating override |
| System | AI system identifier and version |
| Override Type | Category (output rejection, pause, shutdown, kill switch) |
| Reason Code | Standardized reason from approved taxonomy |
| Reason Detail | Free-text description of circumstances |
| Impact | Affected users, transactions, or decisions |
| Resolution | How the situation was resolved |
| Follow-up | Required actions and responsible parties |
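The documentation fields in the table above map one-to-one onto a structured record. The dataclass below is an illustrative sketch of that schema, not a mandated format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    """Illustrative record mirroring the override documentation fields above."""
    operator: str        # identity of the person initiating the override
    system: str          # AI system identifier and version
    override_type: str   # output rejection, pause, shutdown, kill switch
    reason_code: str     # standardized code from the approved taxonomy
    reason_detail: str   # free-text description of circumstances
    impact: str          # affected users, transactions, or decisions
    resolution: str = ""
    follow_up: str = ""
    timestamp: str = ""  # UTC; auto-filled if not supplied

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()
```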
8.7 Override Audit and Review
| Review Type | Frequency | Owner | Output |
|---|---|---|---|
| Override Log Review | Weekly | Domain AI Steward | Pattern identification, training needs |
| Kill Switch Testing | Quarterly | System Owner | Test documentation, remediation if failed |
| Override Trend Analysis | Monthly | AI Risk Officer | Dashboard metrics, escalation if threshold exceeded |
| Audit Sample Review | Semi-annually | Internal Audit | Compliance verification, evidence quality |
8.8 Training Requirements
All personnel with override authority must complete:
| Training | Frequency | Verification |
|---|---|---|
| Human Oversight Fundamentals | Annual | Certification quiz |
| System-Specific Override Procedures | Before access granted | Hands-on demonstration |
| Kill Switch Drill | Quarterly | Participation record |
| Incident Response for AI | Annual | Tabletop exercise |
9. Tooling and Evidence Repositories
| Component | Repository / Tool |
|---|---|
| AI Inventory / Use Case Registry | [GRC system / CMDB / Custom Portal] |
| Model Registry | [MLflow / Cloud Registry / Internal] |
| Documentation Repository | [Confluence / SharePoint] (Controlled Access) |
| Ticketing | [ServiceNow / Jira] |
| Monitoring | [Observability Platform + Model Monitoring] |
| Security Telemetry | [SIEM] |
| Audit Vault | [Secure Evidence Repository] |
| Guardrails Platform | [Input/Output filtering system] |
10. Cadence, Reporting, and Metrics
10.1 Governance Cadence
| Body | Frequency |
|---|---|
| AI Governance Board | Quarterly (Monthly during rollout) |
| AI Risk Review Board | Weekly |
| Domain Steward Forum | Every two weeks |
| Inventory Reconciliation | Monthly |
| Policy Refresh | Semi-annual minimum |
10.2 Key Metrics (Executive Dashboard)
| Metric | Target |
|---|---|
| % AI Systems Inventoried | 100% |
| % Systems with completed tiering + documentation | >95% |
| Time-to-approve (Low Risk) | <5 days |
| Time-to-approve (High/Critical) | <20 days |
| Policy exceptions (open / overdue) | <5 open, 0 overdue |
| Time-to-remediate monitoring alerts | <24 hours |
| Incident rate (severity-weighted) | Trending down |
| EU AI Act compliance readiness | 100% for applicable systems |
11. Escalation Paths
Escalate immediately to AI Risk Review Board (and/or Governance Board) if:
- AI system produces harmful, discriminatory, or unsafe outcomes
- Suspected data leakage / prompt injection / model extraction
- High-severity drift or unexplained performance degradation
- Regulatory complaint or legal hold related to AI outputs
- Shadow AI usage involves sensitive data
- Agentic AI takes unauthorized actions
- Multi-agent system exhibits unexpected emergent behavior
- Serious incident requiring EU AI Office notification
12. Operating Model "Minimum Non-Negotiables"
These rules apply enterprise-wide:
- No AI in production without an owner
- No AI without inventory registration
- Risk tier determines controls (no "one-size-fits-all")
- High/Critical systems require formal review gates
- Monitoring + rollback plan required for production AI
- Exceptions are time-bound and approved
- Human oversight required for all High/Critical AI
- Agentic AI requires explicit action boundaries
- All GPAI models require technical documentation
- Prohibited AI practices are never permitted
- Serious incidents reported within 24 hours
13. Regulatory Alignment Matrix
| NIST AI RMF 2.0 Function | Operating Model Component |
|---|---|
| GOVERN | Charter, Policy, Governance Bodies, Roles |
| MAP | Intake, Classification, Risk Tiering, AI-BOM |
| MEASURE | Risk Assessment, Documentation, Testing, Evaluation |
| MANAGE | Gates, Deployment, Monitoring, Incident Response |
| EU AI Act Requirement | Operating Model Component |
|---|---|
| Risk Classification | Phase 2: Classification & Risk Tiering |
| Prohibited Practices | Policy Artifact, Red Lines |
| High-Risk Conformity | Phase 5: Pre-Production Gate |
| GPAI Obligations | Phase 4: Documentation, Phase 6: Downstream Notification |
| Human Oversight (Art. 14) | Section 8: Human Oversight and Override Procedures |
| Transparency | System Card, User Disclosures |
| Post-Market Monitoring | Phase 7: Monitoring & Oversight |
| ISO/IEC 42001 | Operating Model Component |
|---|---|
| AI Policy | Enterprise AI Policy (Artifact 5) |
| AI Objectives | Purpose & Outcomes (Section 1) |
| Risk Assessment | Phases 2-3 |
| Documented Information | System Cards, Evidence Repositories |
| Monitoring & Measurement | Phase 7, Metrics |
| Continual Improvement | Review Cadence, Exception Handling |
| Human Oversight (A.10.5) | Section 8: Human Oversight and Override Procedures |
Document History
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2025-06-15 | AI Governance Office | Initial release |
| 2.0 | 2026-01-15 | AI Governance Office | Updated for NIST AI RMF 2.0, EU AI Act August 2025, added agentic AI controls |
Next Step: Proceed to Artifact 2: AI Risk Governance Charter (Formal mandate, authority, scope, and accountability)
CODITECT AI Risk Management Framework
Document ID: AI-RMF-01 | Version: 2.0.0 | Status: Active
AZ1.AI Inc. | CODITECT Platform
Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001
This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.
Repository: coditect-ai-risk-management-framework Last Updated: 2026-01-15 Owner: AZ1.AI Inc. | Lead: Hal Casteel