
AI Governance Training Program

Comprehensive Employee & Role-Based Training Curriculum


Document Control

| Field | Details |
| --- | --- |
| Document Type | Training Curriculum |
| Applies To | All employees |
| Owner | AI Governance / Learning & Development |
| Version | 1.0 |
| Review Frequency | Annual |

Training Matrix

| Role | Module 1 | Module 2 | Module 3 | Module 4 | Module 5 |
| --- | --- | --- | --- | --- | --- |
| All Employees | ✓ Required | - | - | - | - |
| AI Users (Power) | ✓ Required | ✓ Required | - | - | - |
| Developers | ✓ Required | ✓ Required | ✓ Required | - | - |
| Managers | ✓ Required | - | - | ✓ Required | - |
| Domain Stewards | ✓ Required | ✓ Required | ✓ Required | ✓ Required | - |
| AI Risk Officer | ✓ Required | ✓ Required | ✓ Required | ✓ Required | ✓ Required |

Module 1: AI Fundamentals for Everyone

Duration: 30 minutes | Audience: All employees | Frequency: Annual


Slide 1.1: Welcome

AI Governance: Your Role in Responsible AI

Learning Objectives:

  1. Understand what AI is and how we use it
  2. Know the rules for safe AI use
  3. Recognize what's prohibited
  4. Know when and how to report issues

Slide 1.2: What Is AI?

Types of AI You'll Encounter

| Type | What It Does | Examples |
| --- | --- | --- |
| Generative AI | Creates new content | ChatGPT, Claude, Copilot |
| Predictive AI | Forecasts outcomes | Sales predictions, recommendations |
| Classification AI | Categorizes data | Spam filters, document routing |
| Agentic AI | Takes autonomous actions | Auto-schedulers, workflow bots |

Simple Test

If it learns, predicts, or creates—it's probably AI and needs governance.


Slide 1.3: The "No Secrets" Rule

🚫 Never Enter Into Public AI Tools:

| Category | Examples |
| --- | --- |
| Personal Data | Names, emails, SSNs, health info, customer records |
| Confidential Info | Source code, financial data, strategies |
| Credentials | Passwords, API keys, tokens |
| Legal/Contracts | Agreements, legal opinions, M&A info |

Why This Matters

  • Public AI may store your inputs
  • Could appear in other users' outputs
  • May violate GDPR, HIPAA, or NDAs
  • Could expose company to liability

Slide 1.4: Approved vs. Prohibited Tools

✅ Approved Tools

| Tool | Approved Uses | Restrictions |
| --- | --- | --- |
| [Internal Tool A] | All business content | No customer PII |
| [Internal Tool B] | Code assistance | No production secrets |
| [Vendor Tool C] | Research | Enterprise tier only |

❌ Never Use for Work

  • Personal ChatGPT accounts
  • Unvetted AI browser extensions
  • Free-tier AI services
  • AI tools from unknown providers

Need a New Tool?

Submit request via [AI Tool Request Form]


Slide 1.5: Prohibited AI Uses

🛑 Absolute Red Lines (Violations = Termination Risk)

| Prohibited Use | Why |
| --- | --- |
| Social scoring systems | Illegal under EU AI Act |
| Subliminal manipulation | Illegal under EU AI Act |
| Exploiting vulnerable groups | Illegal under EU AI Act |
| Real-time biometric surveillance | Illegal under EU AI Act |
| Emotion recognition (work/school) | Illegal under EU AI Act |
| Creating deepfakes of real people | Deception/defamation |
| Final HR decisions (hire/fire) | Human judgment required |
| Medical/legal advice without oversight | Professional standards |

Slide 1.6: Human Oversight Is Required

The Golden Rule

You are responsible for any AI output you publish, send, or act upon.

What This Means

| AI Output | Your Responsibility |
| --- | --- |
| Draft email | Review before sending |
| Generated code | Test before committing |
| Analysis report | Verify key facts |
| Recommendation | Make the final decision |

AI Can Be Wrong

  • Hallucinations: Confidently states false information
  • Bias: Reflects patterns in training data
  • Context blindness: Misses nuance you would catch

Slide 1.7: Reporting AI Issues

When to Report

| Situation | Urgency | Channel |
| --- | --- | --- |
| AI produces harmful content | Immediate | [Incident hotline] |
| AI leaks sensitive data | Immediate | [Incident hotline] |
| AI provides dangerous advice | Immediate | [Incident hotline] |
| AI seems biased/unfair | 24 hours | [AI Governance email] |
| AI quality issues | Next business day | [Ticket system] |

How to Report

  • Email: ai-governance@[company].com
  • Slack: #ai-incidents
  • Form: [Internal link]

Slide 1.8: Quick Reference Card

✅ DO

  • Use only approved AI tools
  • Review all AI outputs before use
  • Report problems immediately
  • Protect sensitive information
  • Ask before trying new AI tools

❌ DON'T

  • Enter PII into public AI
  • Trust AI blindly
  • Use personal AI for work
  • Share AI credentials
  • Ignore AI errors

Slide 1.9: Knowledge Check

Q1: Can you paste customer emails into ChatGPT?

  • A) Yes, if work-related
  • B) No, never use PII in public AI
  • C) Only for internal customers

Q2: Who's responsible for errors in AI-generated work you publish?

  • A) The AI company
  • B) You - the human who published it
  • C) Your manager

Q3: You find a helpful new AI tool. What do you do?

  • A) Start using it
  • B) Ask a colleague
  • C) Request approval from AI Governance

Module 2: AI Power User Training

Duration: 45 minutes | Audience: Heavy AI users | Frequency: Annual


Slide 2.1: Effective Prompting

Prompting Best Practices

| Technique | Example |
| --- | --- |
| Be Specific | "Write a 200-word summary of Q3 results for the sales team" |
| Provide Context | "As a healthcare compliance officer reviewing..." |
| Set Constraints | "Use only publicly available information" |
| Request Format | "Present as a table with columns for..." |

Security in Prompts

  • Never include real names, IDs, or credentials
  • Use placeholders: [CUSTOMER_NAME], [PROJECT_CODE]
  • Anonymize data before input
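The placeholder practice above can be sketched as a small pre-prompt scrubber. This is a minimal sketch assuming simple regex patterns and illustrative placeholder names; a real deployment would use a dedicated PII-detection library rather than hand-rolled patterns.

```python
import re

# Illustrative patterns only -- real PII detection needs a proper library.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace common PII patterns with placeholders before prompting."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Contact jane.doe@example.com or 555-867-5309."))
# -> Contact [EMAIL] or [PHONE].
```

Running the scrubbed text through the AI tool keeps the prompt useful while the real identifiers stay inside your environment.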

Slide 2.2: Verifying AI Outputs

Verification Checklist

| Check | Action |
| --- | --- |
| Factual Claims | Cross-reference with authoritative sources |
| Statistics | Verify numbers against original data |
| Citations | Check that referenced sources exist |
| Logic | Ensure reasoning is sound |
| Completeness | Confirm all aspects addressed |
| Bias | Check for stereotypes or unfairness |

Red Flags

  • Overly confident assertions
  • Round numbers without sources
  • Claims that seem "too perfect"
  • Generic responses to specific questions

Slide 2.3: GenAI for Different Tasks

Task-Specific Guidelines

| Task | Best Practice | Verification |
| --- | --- | --- |
| Writing | Edit for voice/accuracy | Fact-check claims |
| Research | Use as starting point | Verify sources |
| Analysis | Provide clean data | Validate calculations |
| Coding | Review every line | Test thoroughly |
| Translation | Check cultural context | Native speaker review |

Slide 2.4: Managing AI Conversations

Session Management

| Practice | Why |
| --- | --- |
| Start fresh for new topics | Prevents context confusion |
| Clear session after sensitive work | Reduces data exposure |
| Don't rely on AI "memory" | May not persist correctly |
| Save important outputs locally | AI history can disappear |

Data Hygiene

  • Scrub PII before pasting
  • Use synthetic data for examples
  • Export/delete sensitive conversations

Module 3: AI Builder Training

Duration: 90 minutes | Audience: Developers, Data Scientists | Frequency: Annual + Updates


Slide 3.1: The AI Development Lifecycle

Eight Phases

┌────────┐   ┌────────┐   ┌────────┐   ┌────────┐
│1.INTAKE│──▶│2.CLASS-│──▶│3.ASSESS│──▶│4.BUILD │
│        │   │  IFY   │   │        │   │        │
└────────┘   └────────┘   └────────┘   └────────┘

┌────────┐   ┌────────┐   ┌────────┐   ┌────────┐
│8.DECOM-│◀──│7.MONIT-│◀──│6.DEPLOY│◀──│5.GATE  │
│ MISSION│   │  OR    │   │        │   │        │
└────────┘   └────────┘   └────────┘   └────────┘

Your Role

  • Submit intake form BEFORE starting
  • Document in System Card DURING development
  • Pass pre-production gate BEFORE deployment
  • Monitor AFTER deployment

Slide 3.2: Risk Classification for Builders

Classification Factors

| Factor | Score 1 | Score 4 |
| --- | --- | --- |
| Data Sensitivity | Public | PII/Health/Financial |
| Autonomy | Advisory only | Autonomous actions |
| Impact Scope | Internal only | External customers |
| Scale | <100 users | >10,000 users |

Tier = Maximum Score

| Tier | Your Obligations |
| --- | --- |
| Low | Basic documentation |
| Medium | + Security review |
| High | + AIA + Board approval |
| Critical | + Red teaming + Executive sign-off |
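The "tier = maximum score" rule above can be sketched in a few lines. This is a minimal sketch assuming the four factors and 1-4 scoring from the classification table; the function and tier names are illustrative, not the framework's actual API.

```python
# Maps the maximum factor score (1-4) to the risk tier from the table above.
TIER_NAMES = {1: "Low", 2: "Medium", 3: "High", 4: "Critical"}

def risk_tier(data_sensitivity: int, autonomy: int,
              impact_scope: int, scale: int) -> str:
    """The highest single factor score sets the overall tier."""
    scores = (data_sensitivity, autonomy, impact_scope, scale)
    if not all(1 <= s <= 4 for s in scores):
        raise ValueError("each factor is scored 1-4")
    return TIER_NAMES[max(scores)]

# An internal, advisory-only, small-scale tool that touches PII still
# lands in the top tier -- one maxed factor dominates:
print(risk_tier(data_sensitivity=4, autonomy=1, impact_scope=1, scale=1))
# -> Critical
```

Because the tier is a maximum, not an average, a single high-risk factor cannot be offset by low scores elsewhere.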

Slide 3.3: Secure AI Development

Code Security

| Control | Implementation |
| --- | --- |
| No hardcoded secrets | Use Vault, AWS Secrets Manager |
| Safe model serialization | Safetensors, not Pickle |
| Dependency scanning | Snyk, Dependabot for ML libs |
| Input validation | Sanitize all inputs |
| Output filtering | PII detection, toxicity check |
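Two of the controls above can be sketched briefly. This is a minimal sketch assuming the secret is injected into the process environment at deploy time (e.g. by Vault or AWS Secrets Manager); `MODEL_API_KEY` is an illustrative variable name, not a required convention.

```python
import os

def get_api_key() -> str:
    """No hardcoded secrets: read the key at runtime, never from source."""
    key = os.environ.get("MODEL_API_KEY")  # illustrative variable name
    if not key:
        raise RuntimeError("MODEL_API_KEY is not set; fetch it from your secret store")
    return key

# Safe model serialization: safetensors files hold tensors only and cannot
# execute code on load, unlike pickle-based checkpoints. Illustrative usage:
#   from safetensors.torch import load_file
#   weights = load_file("model.safetensors")
```

The point of both controls is the same: nothing executable or secret lives in the artifact you check in or download.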

Model Security

| Threat | Mitigation |
| --- | --- |
| Prompt injection | Input filtering, instruction isolation |
| Model extraction | Rate limiting, access controls |
| Training data poisoning | Data validation, provenance |
| Adversarial inputs | Robustness testing |

Slide 3.4: Required Documentation

System Card (All Production AI)

| Section | Content |
| --- | --- |
| Identity | Name, version, owner, risk tier |
| Purpose | Intended use, out-of-scope uses |
| Data | Training data summary, PII handling |
| Performance | Metrics, evaluation results |
| Risks | Known risks, mitigations |
| Operations | Monitoring, incident playbook |

AI-BOM (For High-Risk+)

| Section | Content |
| --- | --- |
| Model Provenance | Origin, training compute, version |
| Dependencies | Frameworks, libraries, versions |
| Data Sources | Training data lineage |
| Third Parties | External APIs, services |

Slide 3.5: Testing Requirements

By Risk Tier

| Tier | Required Testing |
| --- | --- |
| Low | Functional testing |
| Medium | + Security scan, performance |
| High | + Bias testing, red teaming |
| Critical | + External audit, adversarial |

Bias Testing

| Metric | Definition | Threshold |
| --- | --- | --- |
| Demographic Parity | Equal positive rates across groups | <10% difference |
| Equalized Odds | Equal TPR/FPR across groups | <5% difference |
| Disparate Impact | Ratio of selection rates | >0.8 |
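The demographic-parity and disparate-impact checks above reduce to simple ratio arithmetic. This is a minimal sketch assuming binary (0/1) outcomes per group; the thresholds come from the table, and the function names are illustrative.

```python
def selection_rate(outcomes: list) -> float:
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_ok(group_a: list, group_b: list,
                          max_diff: float = 0.10) -> bool:
    """Positive-rate difference across groups must stay under 10%."""
    return abs(selection_rate(group_a) - selection_rate(group_b)) < max_diff

def disparate_impact_ok(group_a: list, group_b: list,
                        min_ratio: float = 0.8) -> bool:
    """Ratio of the lower selection rate to the higher must exceed 0.8."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb) > min_ratio

a = [1, 1, 0, 1, 0]   # 60% selected
b = [1, 0, 1, 0, 1]   # 60% selected
print(demographic_parity_ok(a, b), disparate_impact_ok(a, b))  # -> True True
```

Equalized odds works the same way but compares true-positive and false-positive rates, which requires ground-truth labels in addition to model decisions.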

Slide 3.6: Agentic AI Development

Additional Controls for Autonomous Agents

| Control | Requirement |
| --- | --- |
| Action Boundaries | Explicit whitelist of permitted actions |
| Kill Switch | Tested monthly, <30 second activation |
| Rate Limits | Token/action budgets |
| Approval Gates | Human approval for sensitive actions |
| Audit Trail | Complete action logging |
| Sandboxing | Isolated test environment |
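Three of the controls above (action boundaries, rate limits, kill switch) can be composed into a single guard in front of the agent. This is a minimal sketch under illustrative assumptions: the action names and the `AgentGuard` class are hypothetical, and a production guard would also log every decision for the audit trail.

```python
import time

# Action Boundaries: explicit whitelist of permitted actions (illustrative names).
PERMITTED_ACTIONS = {"read_calendar", "propose_meeting", "send_draft"}

class AgentGuard:
    def __init__(self, max_actions_per_minute: int = 10):
        self.max_per_minute = max_actions_per_minute
        self.timestamps = []      # sliding one-minute window
        self.killed = False

    def kill(self) -> None:
        """Kill switch: immediately blocks all further actions."""
        self.killed = True

    def allow(self, action: str) -> bool:
        """Gate every proposed action through all three controls."""
        if self.killed or action not in PERMITTED_ACTIONS:
            return False
        now = time.monotonic()
        self.timestamps = [t for t in self.timestamps if now - t < 60]
        if len(self.timestamps) >= self.max_per_minute:
            return False          # budget exhausted; escalate to a human
        self.timestamps.append(now)
        return True

guard = AgentGuard(max_actions_per_minute=2)
print(guard.allow("send_draft"))     # -> True
print(guard.allow("delete_files"))   # -> False: not whitelisted
guard.kill()
print(guard.allow("send_draft"))     # -> False: kill switch engaged
```

Because every action passes through `allow`, the whitelist and budget are enforced in one place, and the kill switch takes effect on the very next call.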

Multi-Agent Systems

| Control | Requirement |
| --- | --- |
| Agent Registration | Unique identity per agent |
| Communication Logging | All inter-agent messages logged |
| Cascade Prevention | Circuit breakers between agents |
| Orchestrator Oversight | Central coordination point |

Slide 3.7: Pre-Production Gate

Checklist (All AI)

  • Intake form submitted
  • Risk tier assigned
  • System Card complete
  • Functional testing passed
  • Owner assigned
  • Monitoring configured

Additional for High-Risk

  • Security review passed
  • Bias testing complete
  • AIA completed
  • Risk Review Board approval
  • Human oversight plan documented
  • Incident playbook ready

Module 4: AI Leader Training

Duration: 60 minutes | Audience: Managers, Product Owners, Stewards | Frequency: Annual


Slide 4.1: Your Governance Responsibilities

Domain Steward Accountabilities

| Responsibility | Actions |
| --- | --- |
| Inventory | Know all AI in your domain |
| Ownership | Every AI system has an assigned owner |
| Classification | Correct risk tier applied |
| Compliance | Required gates passed |
| Monitoring | Regular performance review |
| Escalation | Report incidents promptly |

Slide 4.2: Third-Party AI Management

Before Procuring AI Vendors

| Requirement | Evidence |
| --- | --- |
| Security certification | SOC 2 Type II or ISO 27001 |
| AI governance practices | Completed questionnaire |
| Data processing agreement | GDPR/CCPA compliant |
| Training data policy | Opt-out confirmation |
| IP indemnification | For GenAI vendors |

Ongoing Vendor Management

| Activity | Frequency |
| --- | --- |
| Performance review | Monthly |
| Security attestation | Annual |
| Contract compliance | Quarterly |
| Incident review | As needed |

Slide 4.3: Incident Management

Incident Severity

| Level | Definition | Response |
| --- | --- | --- |
| P1-Critical | Safety risk, regulatory breach | 15 min escalation |
| P2-High | Major malfunction, data exposure | 1 hour escalation |
| P3-Medium | Degraded performance, bias detected | 4 hour response |
| P4-Low | Minor issues, quality concerns | 24 hour response |

Your Role in Incidents

  1. Assess severity immediately
  2. Escalate per severity level
  3. Document what happened
  4. Participate in post-incident review
  5. Implement corrective actions

Slide 4.4: Regulatory Awareness

Key Dates

| Date | Regulation | Impact |
| --- | --- | --- |
| Feb 2025 | EU AI Act - Prohibited practices | Must not use prohibited AI |
| Aug 2025 | EU AI Act - GPAI | GPAI providers must comply |
| Aug 2026 | EU AI Act - High-Risk | Full conformity required |

Your Obligation

  • Know which AI systems in your domain are high-risk
  • Ensure documentation is complete
  • Be ready for audit inquiries

Module 5: Advanced AI Governance

Duration: 120 minutes | Audience: AI Risk Officers, Compliance | Frequency: Annual + Updates


Slide 5.1: Framework Architecture

18-Document Structure

| Layer | Documents | Purpose |
| --- | --- | --- |
| Governance | 01-02 | Authority, structure |
| Classification | 03-04 | Risk tiering |
| Policy | 05 | Rules, prohibitions |
| Documentation | 06-07, 13 | System records |
| Implementation | 08-09 | Execution plans |
| Compliance | 11, 14, 18 | Regulatory alignment |
| Operations | 15-16 | Ongoing management |
| Guidance | 10, 12, 17 | Stakeholder communication |

Slide 5.2: Regulatory Deep Dive

EU AI Act Compliance

| Category | Requirements | Framework Coverage |
| --- | --- | --- |
| Prohibited (Art. 5) | 8 banned practices | Policy §3.1 |
| GPAI (Art. 51-55) | Transparency, documentation | Doc 14 |
| High-Risk (Annex III) | Conformity assessment | Docs 03, 06, 07 |
| Transparency (Art. 50) | User disclosure | Doc 05 |

NIST AI RMF 2.0 Mapping

| Function | Subcategories | Framework Coverage |
| --- | --- | --- |
| GOVERN | 6 categories | Docs 01, 02, 05 |
| MAP | 5 categories | Docs 03, 04, 13 |
| MEASURE | 4 categories | Docs 06, 07, 16 |
| MANAGE | 4 categories | Docs 08, 15, 16 |

Slide 5.3: Audit Preparation

Evidence Repository

| Requirement | Evidence Type | Location |
| --- | --- | --- |
| Governance | Charter, meeting minutes | [Repository] |
| Risk Assessment | Intake forms, AIAs | [Repository] |
| Documentation | System Cards, AI-BOMs | [Repository] |
| Monitoring | Dashboards, reports | [Repository] |
| Training | Completion records | [LMS] |
| Incidents | Response records | [Ticket system] |

Slide 5.4: Continuous Improvement

Metrics Dashboard

| Metric | Target | Measurement |
| --- | --- | --- |
| Inventory coverage | 100% | AI systems / total known |
| Documentation completion | 100% | System Cards / registered AI |
| High-risk approval rate | 100% | Approved / deployed |
| Incident response time | <24h | Average MTTR |
| Training completion | >90% | Completed / required |
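Each dashboard metric above is a simple ratio of a count to its denominator. This is a minimal sketch assuming plain integer counts; the metric keys are illustrative, not the dashboard's actual schema.

```python
def ratio_metric(numerator: int, denominator: int) -> float:
    """Generic ratio metric, e.g. System Cards / registered AI systems."""
    if denominator == 0:
        return 1.0   # vacuously complete when nothing is registered yet
    return numerator / denominator

# Illustrative counts only.
metrics = {
    "inventory_coverage": ratio_metric(47, 47),      # target: 1.00 (100%)
    "training_completion": ratio_metric(930, 1000),  # target: > 0.90
}
print(metrics)
```

Keeping each metric as numerator/denominator (rather than a precomputed percentage) lets reviewers audit both counts when a number looks off.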

Review Cadence

| Activity | Frequency |
| --- | --- |
| Policy review | Annual |
| Risk matrix calibration | Annual |
| Framework assessment | Annual |
| Regulatory monitoring | Quarterly |
| Metrics review | Monthly |

Appendix: Training Resources

Quick Reference Materials

| Resource | Location |
| --- | --- |
| AI Policy | [Intranet link] |
| Approved Tools List | [Intranet link] |
| Tool Request Form | [Form link] |
| Incident Reporting | [Link] |
| System Card Template | [Link] |
| AI-BOM Template | [Link] |

Contact Information

| Role | Contact |
| --- | --- |
| AI Risk Officer | [Email] |
| AI Governance Team | ai-governance@company.com |
| Incident Hotline | [Phone/Slack] |
| Training Support | [LMS help] |

Training Completion Certification

Acknowledgment

By completing this training, I acknowledge:

  • I understand the AI governance policy
  • I will only use approved AI tools
  • I will not enter sensitive data into public AI
  • I will review AI outputs before use
  • I will report AI issues promptly
  • I understand my role-specific responsibilities

Signature: ________________________
Date: ________________________
Employee ID: ________________________
Modules Completed: ________________________


Document Version: 1.0
Training Owner: AI Governance Team
Next Review: [Date + 1 year]


CODITECT AI Risk Management Framework

Document ID: AI-RMF-20 | Version: 2.0.0 | Status: Active


AZ1.AI Inc. | CODITECT Platform

Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001


This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework
Last Updated: 2026-01-15
Owner: AZ1.AI Inc. | Lead: Hal Casteel