AI Governance Training Program
Comprehensive Employee & Role-Based Training Curriculum
Document Control
| Field | Details |
|---|---|
| Document Type | Training Curriculum |
| Applies To | All employees |
| Owner | AI Governance / Learning & Development |
| Version | 1.0 |
| Review Frequency | Annual |
Training Matrix
| Role | Module 1 | Module 2 | Module 3 | Module 4 | Module 5 |
|---|---|---|---|---|---|
| All Employees | ✓ Required | - | - | - | - |
| AI Users (Power) | ✓ Required | ✓ Required | - | - | - |
| Developers | ✓ Required | ✓ Required | ✓ Required | - | - |
| Managers | ✓ Required | - | - | ✓ Required | - |
| Domain Stewards | ✓ Required | ✓ Required | ✓ Required | ✓ Required | - |
| AI Risk Officer | ✓ Required | ✓ Required | ✓ Required | ✓ Required | ✓ Required |
Module 1: AI Fundamentals for Everyone
Duration: 30 minutes | Audience: All employees | Frequency: Annual
Slide 1.1: Welcome
AI Governance: Your Role in Responsible AI
Learning Objectives:
- Understand what AI is and how we use it
- Know the rules for safe AI use
- Recognize what's prohibited
- Know when and how to report issues
Slide 1.2: What Is AI?
Types of AI You'll Encounter
| Type | What It Does | Examples |
|---|---|---|
| Generative AI | Creates new content | ChatGPT, Claude, Copilot |
| Predictive AI | Forecasts outcomes | Sales predictions, recommendations |
| Classification AI | Categorizes data | Spam filters, document routing |
| Agentic AI | Takes autonomous actions | Auto-schedulers, workflow bots |
Simple Test
If it learns, predicts, or creates—it's probably AI and needs governance.
Slide 1.3: The "No Secrets" Rule
Never enter the following into public AI tools:
| Category | Examples |
|---|---|
| Personal Data | Names, emails, SSNs, health info, customer records |
| Confidential Info | Source code, financial data, strategies |
| Credentials | Passwords, API keys, tokens |
| Legal/Contracts | Agreements, legal opinions, M&A info |
Why This Matters
- Public AI may store your inputs
- Could appear in other users' outputs
- May violate GDPR, HIPAA, or NDAs
- Could expose company to liability
Slide 1.4: Approved AI Tools
| Tool | Approved Uses | Restrictions |
|---|---|---|
| [Internal Tool A] | All business content | No customer PII |
| [Internal Tool B] | Code assistance | No production secrets |
| [Vendor Tool C] | Research | Enterprise tier only |
❌ Never Use for Work
- Personal ChatGPT accounts
- Unvetted AI browser extensions
- Free-tier AI services
- AI tools from unknown providers
Submit request via [AI Tool Request Form]
Slide 1.5: Prohibited AI Uses
🛑 Absolute Red Lines (Violations = Termination Risk)
| Prohibited Use | Why |
|---|---|
| Social scoring systems | Illegal under EU AI Act |
| Subliminal manipulation | Illegal under EU AI Act |
| Exploiting vulnerable groups | Illegal under EU AI Act |
| Real-time biometric surveillance | Illegal under EU AI Act |
| Emotion recognition (work/school) | Illegal under EU AI Act |
| Creating deepfakes of real people | Deception/defamation |
| Final HR decisions (hire/fire) | Human judgment required |
| Medical/legal advice without oversight | Professional standards |
Slide 1.6: Human Oversight Is Required
The Golden Rule
You are responsible for any AI output you publish, send, or act upon.
What This Means
| AI Output | Your Responsibility |
|---|---|
| Draft email | Review before sending |
| Generated code | Test before committing |
| Analysis report | Verify key facts |
| Recommendation | Make the final decision |
AI Can Be Wrong
- Hallucinations: Confidently states false information
- Bias: Reflects patterns in training data
- Context blindness: Misses nuance you would catch
Slide 1.7: Reporting AI Issues
When to Report
| Situation | Urgency | Channel |
|---|---|---|
| AI produces harmful content | Immediate | [Incident hotline] |
| AI leaks sensitive data | Immediate | [Incident hotline] |
| AI provides dangerous advice | Immediate | [Incident hotline] |
| AI seems biased/unfair | 24 hours | [AI Governance email] |
| AI quality issues | Next business day | [Ticket system] |
How to Report
- Email: ai-governance@[company].com
- Slack: #ai-incidents
- Form: [Internal link]
Slide 1.8: Quick Reference Card
✅ DO
- Use only approved AI tools
- Review all AI outputs before use
- Report problems immediately
- Protect sensitive information
- Ask before trying new AI tools
❌ DON'T
- Enter PII into public AI
- Trust AI blindly
- Use personal AI for work
- Share AI credentials
- Ignore AI errors
Slide 1.9: Knowledge Check
Q1: Can you paste customer emails into ChatGPT?
- A) Yes, if work-related
- B) No, never use PII in public AI ✓
- C) Only for internal customers
Q2: Who's responsible for errors in AI-generated work you publish?
- A) The AI company
- B) You - the human who published it ✓
- C) Your manager
Q3: You find a helpful new AI tool. What do you do?
- A) Start using it
- B) Ask a colleague
- C) Request approval from AI Governance ✓
Module 2: AI Power User Training
Duration: 45 minutes | Audience: Heavy AI users | Frequency: Annual
Slide 2.1: Effective Prompting
Prompting Best Practices
| Technique | Example |
|---|---|
| Be Specific | "Write a 200-word summary of Q3 results for the sales team" |
| Provide Context | "As a healthcare compliance officer reviewing..." |
| Set Constraints | "Use only publicly available information" |
| Request Format | "Present as a table with columns for..." |
Security in Prompts
- Never include real names, IDs, or credentials
- Use placeholders: [CUSTOMER_NAME], [PROJECT_CODE]
- Anonymize data before input
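The placeholder practice above can be sketched as a simple pre-processing step. This is a minimal illustration only, assuming regex patterns for two common PII types (emails and US SSNs); a real deployment would use a dedicated PII-detection tool covering far more categories.

```python
import re

# Hypothetical patterns for two common PII types; real tooling
# covers names, phone numbers, health identifiers, and more.
PII_PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_prompt(text: str) -> str:
    """Replace detected PII with placeholders before sending to an AI tool."""
    for placeholder, pattern in PII_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(scrub_prompt("Contact jane.doe@example.com, SSN 123-45-6789."))
# → Contact [EMAIL], SSN [SSN].
```

The same scrubbing step can run automatically in any internal wrapper around an approved AI tool, so placeholders are applied even when a user forgets.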
Slide 2.2: Verifying AI Outputs
Verification Checklist
| Check | Action |
|---|---|
| Factual Claims | Cross-reference with authoritative sources |
| Statistics | Verify numbers against original data |
| Citations | Check that referenced sources exist |
| Logic | Ensure reasoning is sound |
| Completeness | Confirm all aspects addressed |
| Bias | Check for stereotypes or unfairness |
Red Flags
- Overly confident assertions
- Round numbers without sources
- Claims that seem "too perfect"
- Generic responses to specific questions
Slide 2.3: GenAI for Different Tasks
Task-Specific Guidelines
| Task | Best Practice | Verification |
|---|---|---|
| Writing | Edit for voice/accuracy | Fact-check claims |
| Research | Use as starting point | Verify sources |
| Analysis | Provide clean data | Validate calculations |
| Coding | Review every line | Test thoroughly |
| Translation | Check cultural context | Native speaker review |
Slide 2.4: Managing AI Conversations
Session Management
| Practice | Why |
|---|---|
| Start fresh for new topics | Prevents context confusion |
| Clear session after sensitive work | Reduces data exposure |
| Don't rely on AI "memory" | May not persist correctly |
| Save important outputs locally | AI history can disappear |
Data Hygiene
- Scrub PII before pasting
- Use synthetic data for examples
- Export/delete sensitive conversations
Module 3: AI Builder Training
Duration: 90 minutes | Audience: Developers, Data Scientists | Frequency: Annual + Updates
Slide 3.1: The AI Development Lifecycle
Eight Phases
┌────────┐   ┌────────┐   ┌────────┐   ┌────────┐
│1.INTAKE│──▶│2.CLASS-│──▶│3.ASSESS│──▶│4.BUILD │
│        │   │  IFY   │   │        │   │        │
└────────┘   └────────┘   └────────┘   └───┬────┘
                                           │
┌────────┐   ┌────────┐   ┌────────┐   ┌───▼────┐
│8.DECOM-│◀──│7.MONIT-│◀──│6.DEPLOY│◀──│5.GATE  │
│ MISSION│   │  OR    │   │        │   │        │
└────────┘   └────────┘   └────────┘   └────────┘
Your Role
- Submit intake form BEFORE starting
- Document in System Card DURING development
- Pass pre-production gate BEFORE deployment
- Monitor AFTER deployment
Slide 3.2: Risk Classification for Builders
Classification Factors
| Factor | Score 1 | Score 4 |
|---|---|---|
| Data Sensitivity | Public | PII/Health/Financial |
| Autonomy | Advisory only | Autonomous actions |
| Impact Scope | Internal only | External customers |
| Scale | <100 users | >10,000 users |
Tier = Maximum Score
| Tier | Your Obligations |
|---|---|
| Low | Basic documentation |
| Medium | + Security review |
| High | + AIA + Board approval |
| Critical | + Red teaming + Executive sign-off |
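The "Tier = Maximum Score" rule lends itself to a direct expression: the single worst factor drives the tier, so one high-sensitivity factor is enough to make a system Critical. A sketch, assuming scores 1-4 map to the four tiers and using illustrative factor names taken from the table:

```python
# Assumed mapping: factor scores 1-4 correspond to the four risk tiers.
TIER_BY_SCORE = {1: "Low", 2: "Medium", 3: "High", 4: "Critical"}

def classify(scores: dict[str, int]) -> str:
    """Risk tier is driven by the single worst factor (Tier = Maximum Score)."""
    return TIER_BY_SCORE[max(scores.values())]

example = {
    "data_sensitivity": 4,  # PII/Health/Financial
    "autonomy": 1,          # Advisory only
    "impact_scope": 2,
    "scale": 2,
}
print(classify(example))  # → Critical
```

Note the design consequence: averaging would let low scores dilute a dangerous factor, which is exactly what the maximum rule prevents.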
Slide 3.3: Secure AI Development
Code Security
| Control | Implementation |
|---|---|
| No hardcoded secrets | Use Vault, AWS Secrets Manager |
| Safe model serialization | Safetensors, not Pickle |
| Dependency scanning | Snyk, Dependabot for ML libs |
| Input validation | Sanitize all inputs |
| Output filtering | PII detection, toxicity check |
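The "no hardcoded secrets" control can be illustrated with the simplest compliant pattern: an accessor that reads from the environment, so the key never appears in the repository. This is a sketch only; in production the same accessor would call Vault or AWS Secrets Manager, and the variable name `MODEL_API_KEY` is purely illustrative.

```python
import os

def get_api_key(name: str = "MODEL_API_KEY") -> str:
    """Fetch a secret at runtime instead of hardcoding it in source.

    A production version would call Vault or AWS Secrets Manager behind
    this same interface; the invariant is that the secret is never
    committed to the codebase.
    """
    key = os.environ.get(name)
    if key is None:
        raise RuntimeError(f"Secret {name!r} not configured")
    return key
```

Failing loudly when the secret is absent is deliberate: a missing key should stop deployment, not fall back to a default.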
Model Security
| Threat | Mitigation |
|---|---|
| Prompt injection | Input filtering, instruction isolation |
| Model extraction | Rate limiting, access controls |
| Training data poisoning | Data validation, provenance |
| Adversarial inputs | Robustness testing |
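The rate-limiting mitigation for model extraction is commonly implemented as a token bucket per client: each request spends a token, and tokens refill at a fixed rate, capping how fast an attacker can query the model. A minimal single-client sketch (capacity and refill rate are illustrative):

```python
import time

class TokenBucket:
    """Simple rate limiter to slow model-extraction attempts.

    Each call to allow() spends one token; tokens refill continuously
    at refill_per_sec up to the bucket's capacity.
    """

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# With no refill, only the initial capacity of 3 requests is served:
bucket = TokenBucket(capacity=3, refill_per_sec=0.0)
print([bucket.allow() for _ in range(5)])  # → [True, True, True, False, False]
```

In a real serving stack this sits at the API gateway, keyed per API token or IP, alongside the access controls listed above.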
Slide 3.4: Required Documentation
System Card (All Production AI)
| Section | Content |
|---|---|
| Identity | Name, version, owner, risk tier |
| Purpose | Intended use, out-of-scope uses |
| Data | Training data summary, PII handling |
| Performance | Metrics, evaluation results |
| Risks | Known risks, mitigations |
| Operations | Monitoring, incident playbook |
AI-BOM (For High-Risk+)
| Section | Content |
|---|---|
| Model Provenance | Origin, training compute, version |
| Dependencies | Frameworks, libraries, versions |
| Data Sources | Training data lineage |
| Third Parties | External APIs, services |
Slide 3.5: Testing Requirements
By Risk Tier
| Tier | Required Testing |
|---|---|
| Low | Functional testing |
| Medium | + Security scan, performance |
| High | + Bias testing, red teaming |
| Critical | + External audit, adversarial |
Bias Testing
| Metric | Definition | Threshold |
|---|---|---|
| Demographic Parity | Equal positive rates across groups | <10% difference |
| Equalized Odds | Equal TPR/FPR across groups | <5% difference |
| Disparate Impact | Ratio of selection rates | >0.8 |
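The two threshold checks above reduce to simple arithmetic on per-group selection rates. A sketch of how they might be computed against the table's thresholds (the 0.45/0.50 rates are illustrative):

```python
def demographic_parity_diff(rate_a: float, rate_b: float) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(rate_a - rate_b)

def disparate_impact(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher (four-fifths rule)."""
    lo, hi = sorted((rate_a, rate_b))
    return lo / hi

# Illustrative selection rates of 45% and 50% for two groups:
print(round(demographic_parity_diff(0.45, 0.50), 2))  # → 0.05 (passes <10%)
print(round(disparate_impact(0.45, 0.50), 2))         # → 0.9  (passes >0.8)
```

Equalized odds follows the same pattern but compares true-positive and false-positive rates separately, so it needs labeled outcomes, not just selection counts.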
Slide 3.6: Agentic AI Development
Additional Controls for Autonomous Agents
| Control | Requirement |
|---|---|
| Action Boundaries | Explicit whitelist of permitted actions |
| Kill Switch | Tested monthly, <30 second activation |
| Rate Limits | Token/action budgets |
| Approval Gates | Human approval for sensitive actions |
| Audit Trail | Complete action logging |
| Sandboxing | Isolated test environment |
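The action-boundary and kill-switch controls above can be combined into one guard that every agent action must pass through. A minimal sketch; the action names are hypothetical, and a real kill switch would be an external flag (feature store, config service) rather than an in-process dict.

```python
# Explicit whitelist: anything not listed is denied by default.
ALLOWED_ACTIONS = {"send_summary_email", "create_calendar_event"}

# Stand-in for an externally controlled kill switch flag.
KILL_SWITCH = {"engaged": False}

def execute(action: str) -> str:
    """Gate every agent action behind the kill switch and the whitelist."""
    if KILL_SWITCH["engaged"]:
        return "blocked: kill switch engaged"
    if action not in ALLOWED_ACTIONS:
        return f"blocked: {action!r} not on whitelist"
    return f"executed: {action}"

print(execute("delete_database"))  # → blocked: 'delete_database' not on whitelist
KILL_SWITCH["engaged"] = True
print(execute("send_summary_email"))  # → blocked: kill switch engaged
```

Checking the kill switch before the whitelist matters: once operators engage it, even approved actions must stop within the required activation window.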
Multi-Agent Systems
| Control | Requirement |
|---|---|
| Agent Registration | Unique identity per agent |
| Communication Logging | All inter-agent messages logged |
| Cascade Prevention | Circuit breakers between agents |
| Orchestrator Oversight | Central coordination point |
Slide 3.7: Pre-Production Gate
Checklist (All AI)
Additional for High-Risk
Module 4: AI Leader Training
Duration: 60 minutes | Audience: Managers, Product Owners, Stewards | Frequency: Annual
Slide 4.1: Your Governance Responsibilities
Domain Steward Accountabilities
| Responsibility | Actions |
|---|---|
| Inventory | Know all AI in your domain |
| Ownership | Every AI system has an assigned owner |
| Classification | Correct risk tier applied |
| Compliance | Required gates passed |
| Monitoring | Regular performance review |
| Escalation | Report incidents promptly |
Slide 4.2: Third-Party AI Management
Before Procuring AI Vendors
| Requirement | Evidence |
|---|---|
| Security certification | SOC 2 Type II or ISO 27001 |
| AI governance practices | Completed questionnaire |
| Data processing agreement | GDPR/CCPA compliant |
| Training data policy | Opt-out confirmation |
| IP indemnification | For GenAI vendors |
Ongoing Vendor Management
| Activity | Frequency |
|---|---|
| Performance review | Monthly |
| Security attestation | Annual |
| Contract compliance | Quarterly |
| Incident review | As needed |
Slide 4.3: Incident Management
Incident Severity
| Level | Definition | Response |
|---|---|---|
| P1-Critical | Safety risk, regulatory breach | 15 min escalation |
| P2-High | Major malfunction, data exposure | 1 hour escalation |
| P3-Medium | Degraded performance, bias detected | 4 hour response |
| P4-Low | Minor issues, quality concerns | 24 hour response |
Your Role in Incidents
- Assess severity immediately
- Escalate per severity level
- Document what happened
- Participate in post-incident review
- Implement corrective actions
Slide 4.4: Regulatory Awareness
Key Dates
| Date | Regulation | Impact |
|---|---|---|
| Feb 2025 | EU AI Act - Prohibited | Must not use prohibited AI |
| Aug 2025 | EU AI Act - GPAI | GPAI providers must comply |
| Aug 2026 | EU AI Act - High-Risk | Full conformity required |
Your Obligation
- Know which AI systems in your domain are high-risk
- Ensure documentation is complete
- Be ready for audit inquiries
Module 5: Advanced AI Governance
Duration: 120 minutes | Audience: AI Risk Officers, Compliance | Frequency: Annual + Updates
Slide 5.1: Framework Architecture
18-Document Structure
| Layer | Documents | Purpose |
|---|---|---|
| Governance | 01-02 | Authority, structure |
| Classification | 03-04 | Risk tiering |
| Policy | 05 | Rules, prohibitions |
| Documentation | 06-07, 13 | System records |
| Implementation | 08-09 | Execution plans |
| Compliance | 11, 14, 18 | Regulatory alignment |
| Operations | 15-16 | Ongoing management |
| Guidance | 10, 12, 17 | Stakeholder communication |
Slide 5.2: Regulatory Deep Dive
EU AI Act Compliance
| Category | Requirements | Framework Coverage |
|---|---|---|
| Prohibited (Art. 5) | 8 banned practices | Policy §3.1 |
| GPAI (Art. 51-55) | Transparency, documentation | Doc 14 |
| High-Risk (Annex III) | Conformity assessment | Docs 03, 06, 07 |
| Transparency (Art. 50) | User disclosure | Doc 05 |
NIST AI RMF 2.0 Mapping
| Function | Subcategories | Framework Coverage |
|---|---|---|
| GOVERN | 6 categories | Doc 01, 02, 05 |
| MAP | 5 categories | Doc 03, 04, 13 |
| MEASURE | 4 categories | Doc 06, 07, 16 |
| MANAGE | 4 categories | Doc 08, 15, 16 |
Slide 5.3: Audit Preparation
Evidence Repository
| Requirement | Evidence Type | Location |
|---|---|---|
| Governance | Charter, meeting minutes | [Repository] |
| Risk Assessment | Intake forms, AIAs | [Repository] |
| Documentation | System Cards, AI-BOMs | [Repository] |
| Monitoring | Dashboards, reports | [Repository] |
| Training | Completion records | [LMS] |
| Incidents | Response records | [Ticket system] |
Slide 5.4: Continuous Improvement
Metrics Dashboard
| Metric | Target | Measurement |
|---|---|---|
| Inventory coverage | 100% | AI systems / total known |
| Documentation completion | 100% | System Cards / registered AI |
| High-risk approval rate | 100% | Approved / deployed |
| Incident response time | <24h | Average MTTR |
| Training completion | >90% | Completed / required |
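Each dashboard metric above is a ratio expressed as a percentage. A small helper makes the computation uniform, with a guard for an empty denominator (the 47/50 figures are illustrative):

```python
def coverage(numerator: int, denominator: int) -> float:
    """Percentage metric such as 'System Cards / registered AI'."""
    return 100.0 * numerator / denominator if denominator else 0.0

# Illustrative: 47 of 50 registered AI systems have a System Card.
print(f"{coverage(47, 50):.1f}%")  # → 94.0%
```

The same function serves inventory coverage, approval rate, and training completion; only incident response time (an average MTTR, not a ratio) needs separate handling.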
Review Cadence
| Activity | Frequency |
|---|---|
| Policy review | Annual |
| Risk matrix calibration | Annual |
| Framework assessment | Annual |
| Regulatory monitoring | Quarterly |
| Metrics review | Monthly |
Appendix: Training Resources
Quick Reference Materials
| Resource | Location |
|---|---|
| AI Policy | [Intranet link] |
| Approved Tools List | [Intranet link] |
| Tool Request Form | [Form link] |
| Incident Reporting | [Link] |
| System Card Template | [Link] |
| AI-BOM Template | [Link] |
Key Contacts
| Role | Contact |
|---|---|
| AI Risk Officer | [Email] |
| AI Governance Team | ai-governance@company.com |
| Incident Hotline | [Phone/Slack] |
| Training Support | [LMS help] |
Training Completion Certification
Acknowledgment
By completing this training, I acknowledge:
- I have completed all modules required for my role
- I understand and will follow the AI policy, including the approved-tools and "No Secrets" rules
- I will review AI outputs before use and report issues through the designated channels
Signature: ________________________
Date: ________________________
Employee ID: ________________________
Modules Completed: ________________________
Document Version: 1.0
Training Owner: AI Governance Team
Next Review: [Date + 1 year]
CODITECT AI Risk Management Framework
Document ID: AI-RMF-20 | Version: 2.0.0 | Status: Active
AZ1.AI Inc. | CODITECT Platform
Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001
This document is part of the CODITECT AI Risk Management Framework.
For questions or updates, contact the AI Governance Office.
Repository: coditect-ai-risk-management-framework
Last Updated: 2026-01-15
Owner: AZ1.AI Inc. | Lead: Hal Casteel