Worked Examples and Sample Templates
Practical Implementation Guidance
Document Purpose
This document provides complete, filled-out examples of key AI governance templates to help practitioners understand how to apply the framework in real-world scenarios. Each example includes:
- A realistic use case scenario
- Complete template with sample data
- Commentary on key decisions
- Common pitfalls to avoid
Example Index
| Example | Use Case | Risk Level | EU AI Act |
|---|---|---|---|
| Example A | HR Resume Screening | High | High-Risk (Employment) |
| Example B | Customer Support Chatbot | Medium | Limited Risk |
| Example C | Fraud Detection | High | High-Risk (Financial) |
| Example D | Internal Knowledge Assistant | Low | Minimal Risk |
Example A: HR Resume Screening AI
Scenario Description
ACME Corp wants to deploy an AI system to screen job applications for software engineering positions. The system will:
- Parse resumes and extract qualifications
- Score candidates against job requirements
- Rank candidates for human recruiter review
- Flag potential concerns for closer examination
This is a High-Risk use case under the EU AI Act (employment and recruitment).
A.1 Completed AI Intake & Registration Form
| Field | Response |
|---|---|
| Project Name | TalentScreen AI |
| Inventory ID | AI-2026-HR-001 |
| Submission Date | 2026-01-16 |
| Business Unit / Department | Human Resources |
Problem Statement: Manual resume screening takes an average of 23 minutes per application. With 500+ applications per role, recruiters spend excessive time on initial screening, delaying time-to-hire and missing qualified candidates due to fatigue.
AI Solution Description: TalentScreen AI uses NLP to extract skills, experience, and qualifications from resumes and scores candidates against predefined job requirements. Human recruiters review all AI-flagged candidates and make final decisions.
Project Stage: ☑ Pre-Production / Testing
Key Roles:
| Role | Name | Email | Department |
|---|---|---|---|
| Business Owner | Sarah Chen | sarah.chen@acme.com | HR |
| Technical Lead | Marcus Johnson | marcus.j@acme.com | Engineering |
| Project Manager | Lisa Park | lisa.park@acme.com | HR Operations |
| Data Owner | James Wright | james.w@acme.com | People Analytics |
Source of Model: ☑ Hybrid (Fine-tuning vendor model with company data)
Model Category: ☑ Predictive/Classification, ☑ NLP
Vendor Details:
| Field | Response |
|---|---|
| Vendor Name | ResumeAI Corp |
| Product/Service Name | ResumeAI Enterprise |
| Contract Status | In negotiation |
| Data Processing Location | US (AWS us-east-1) |
| IP Indemnification Provided? | Yes |
| Zero Data Retention Clause? | Yes |
| SOC 2 / ISO 27001 Certified? | Yes (SOC 2 Type II) |
Data Sensitivity: ☑ Confidential (PII: names, emails, education, work history)
Impact of Failure: ☑ Financial/Legal (Discrimination risk, regulatory fine, qualified candidate rejection)
Failure Scenarios:
- AI systematically underscores candidates from certain universities (proxy discrimination)
- Gender-biased scoring due to historical hiring data patterns
- Technical skill mismatch leading to wrong candidates advancing
Level of Autonomy: ☑ Human-in-the-Loop (Recruiter reviews and approves all shortlists)
User Impact Scope: ☑ External Customers (Job applicants)
Scale of Deployment:
| Field | Response |
|---|---|
| Estimated number of users | 50 recruiters, 10,000+ applicants/year |
| Estimated decisions/predictions per day | 200 resume scores |
| Geographic scope | National (US, with EU expansion planned) |
EU AI Act High-Risk Category: ☑ Employment (recruitment, screening)
Other Regulatory Requirements: ☑ CCPA/CPRA (California applicants), ☑ NYC Local Law 144 (automated employment decision tools)
A.2 Completed AI System Card
Document Control
| Field | Response |
|---|---|
| Document Status | Under Review |
| Associated Registry ID | AI-2026-HR-001 |
| Last Updated | 2026-01-16 |
| Version | v0.9.0 (Pre-production) |
| Framework Alignment | NIST AI RMF 2.0, EU AI Act Article 11 |
1. System Identity
| Field | Response |
|---|---|
| System Name | TalentScreen AI |
| Version | 1.0.0-beta |
| Model Type | Fine-tuned LLM + XGBoost Ensemble |
| Business Owner | Sarah Chen (VP, Talent Acquisition) |
| Technical Owner | Marcus Johnson (Sr. ML Engineer) |
| Development Date | 2025-11-01 |
| Production Date | Planned: 2026-03-01 |
| Risk Tier | High |
| EU AI Act Classification | High-Risk (Employment) |
2. Intended Use & Limitations
2.1 Intended Use Cases
| Field | Description |
|---|---|
| Primary Task | Score and rank job applications for software engineering roles |
| Target Audience | ACME Corp recruiters and hiring managers |
| Deployment Environment | Internal HR systems, ATS integration |
| Expected Frequency | ~200 resume evaluations per business day |
2.2 Out-of-Scope Use Cases (Anti-Patterns)
| Prohibited Use | Reason |
|---|---|
| Automatic rejection without human review | Legal requirement for human oversight |
| Executive or leadership roles | Insufficient training data, higher stakes |
| Roles outside software engineering | Model trained specifically for tech roles |
| Final hiring decision | Human must make all employment decisions |
2.3 Limitations
| Limitation Type | Description |
|---|---|
| Knowledge Cutoff | Training data: 2020-2025 applications |
| Language Support | English only |
| Input Constraints | PDF/DOCX resumes, max 10 pages |
| Environmental Factors | Performance degrades with non-standard formats |
| Known Weaknesses | Career changers, non-linear career paths, international credentials |
3. Data Lineage & Privacy
3.1 Training Data
| Field | Response |
|---|---|
| Data Sources | Historical ACME applications (2020-2025), anonymized public job descriptions |
| Data Collection Period | 2020-01-01 to 2025-09-30 |
| Data Volume | 45,000 resumes, 12,000 hiring outcomes |
| Preprocessing Steps | PII tokenization, format standardization, deduplication |
3.2 Sensitive Data Assessment
| Data Category | Present | Mitigation |
|---|---|---|
| PII (Names, emails, addresses) | Yes | Tokenized before model inference, encrypted at rest |
| SPI (Health/Finance) | No | Explicitly filtered from training |
| IP (Intellectual Property) | No | N/A |
| Biometric Data | No | Photos stripped from resumes |
3.3 Data Rights
| Question | Response |
|---|---|
| Legal right to use data for training? | Yes (candidate consent in application terms) |
| Commercial use permitted by license? | Yes |
| Copyright compliance verified? | Yes |
| Training data summary published? (GPAI) | N/A (not GPAI) |
4. Performance Metrics
| Metric | Value | Threshold | Status |
|---|---|---|---|
| Precision (qualified candidates) | 0.87 | ≥0.80 | ✅ Pass |
| Recall (not missing qualified) | 0.92 | ≥0.85 | ✅ Pass |
| F1 Score | 0.89 | ≥0.82 | ✅ Pass |
| False Positive Rate | 0.15 | ≤0.20 | ✅ Pass |
| False Negative Rate | 0.08 | ≤0.15 | ✅ Pass |
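The status column above is mechanical once the thresholds are fixed. A minimal sketch of the gating check (metric names and bounds taken from the table; the `gate` helper itself is illustrative, not part of any template):

```python
# Pass/fail gating against the performance thresholds in the table above.
# Each entry: (comparison, bound) — ">=" means higher is better.
THRESHOLDS = {
    "precision":           (">=", 0.80),
    "recall":              (">=", 0.85),
    "f1":                  (">=", 0.82),
    "false_positive_rate": ("<=", 0.20),
    "false_negative_rate": ("<=", 0.15),
}

def gate(metrics: dict) -> dict:
    """Return {metric: 'Pass' | 'Fail'} for every thresholded metric."""
    out = {}
    for name, (op, bound) in THRESHOLDS.items():
        value = metrics[name]
        ok = value >= bound if op == ">=" else value <= bound
        out[name] = "Pass" if ok else "Fail"
    return out

observed = {"precision": 0.87, "recall": 0.92, "f1": 0.89,
            "false_positive_rate": 0.15, "false_negative_rate": 0.08}
print(gate(observed))
```

Running this against the observed values reproduces the all-pass status column; feed it next month's metrics to regenerate the table during monitoring.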
5. Fairness & Bias Assessment
5.1 Demographic Parity Analysis
| Protected Group | Selection Rate | Adverse Impact Ratio | Status |
|---|---|---|---|
| Gender: Male | 24.3% | Reference | N/A |
| Gender: Female | 22.1% | 0.91 | ✅ Pass (≥0.80) |
| Gender: Non-binary | 23.8% | 0.98 | ✅ Pass |
| Ethnicity: White | 23.5% | Reference | N/A |
| Ethnicity: Asian | 25.1% | 1.07 | ✅ Pass |
| Ethnicity: Black | 21.2% | 0.90 | ✅ Pass |
| Ethnicity: Hispanic | 22.8% | 0.97 | ✅ Pass |
| Age: 20-29 | 25.2% | Reference | N/A |
| Age: 30-39 | 24.1% | 0.96 | ✅ Pass |
| Age: 40-49 | 22.3% | 0.89 | ✅ Pass |
| Age: 50+ | 20.1% | 0.80 | ⚠️ Monitor |
5.2 Bias Mitigation Measures
| Measure | Implementation |
|---|---|
| Name/Gender Masking | Names replaced with tokens during inference |
| University De-weighting | Elite university names masked, focus on skills |
| Gap Penalty Removal | Employment gaps not penalized in scoring |
| Calibration by Group | Score distributions equalized across demographics |
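The name/gender masking row can be made concrete. A sketch of inference-time masking, assuming the ATS supplies the candidate's name and email as structured fields (the `mask_candidate` helper and the token strings are illustrative, not part of the template):

```python
import re

def mask_candidate(resume_text: str, candidate: dict) -> str:
    """Replace the candidate's known name and email with neutral tokens
    before the text is sent to the scoring model. The ATS's structured
    fields, not free-text NER, are the source of truth for what to mask."""
    masked = resume_text
    for token, value in (("[NAME]", candidate["name"]),
                         ("[EMAIL]", candidate["email"])):
        masked = re.sub(re.escape(value), token, masked, flags=re.IGNORECASE)
    return masked

resume = "Jane Doe (jane.doe@example.com) has 7 years of Python experience."
print(mask_candidate(resume, {"name": "Jane Doe", "email": "jane.doe@example.com"}))
```

A production system would also handle nicknames, initials, and name fragments inside URLs; this sketch only shows the principle of masking from structured fields rather than trying to detect names in free text.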
6. Human Oversight
| Control | Implementation |
|---|---|
| Review Requirement | 100% of AI recommendations reviewed by recruiter |
| Override Capability | Recruiter can override any AI score |
| Escalation Path | Bias concerns → HR Director → Legal |
| Audit Trail | All decisions logged with justification |
7. Risk Assessment Summary
| Risk | Likelihood | Impact | Mitigation | Residual |
|---|---|---|---|---|
| Discriminatory scoring | Medium | High | Bias testing, masking, calibration | Low |
| Data breach | Low | High | Encryption, access controls, auditing | Low |
| Model drift | Medium | Medium | Monthly performance monitoring | Low |
| Adversarial gaming | Low | Low | Input validation, anomaly detection | Low |
8. Approvals
| Role | Name | Date | Status |
|---|---|---|---|
| Business Owner | Sarah Chen | 2026-01-10 | Approved |
| Technical Review | Marcus Johnson | 2026-01-12 | Approved |
| Legal Review | Amanda Foster | 2026-01-14 | Pending |
| AI Governance | AI Governance Office | — | Pending |
| HR Compliance | Jennifer Liu | 2026-01-15 | Approved |
A.3 Key Decisions & Commentary
Why High-Risk Classification? Employment decisions directly impact people's livelihoods and are explicitly listed in EU AI Act Annex III. Even though humans make final decisions, the AI's ranking significantly influences outcomes.
Why Human-in-the-Loop (not Human-on-the-Loop)? Given the high-risk nature, we require explicit human review of every AI recommendation, not just monitoring. Recruiters must actively confirm or override each shortlist.
Bias Testing Approach: We used the four-fifths (80%) rule from EEOC guidelines as our adverse impact threshold: any group whose selection rate falls below 80% of the highest-rate group's triggers review. Note that the table in 5.1 reports ratios against a fixed reference group for readability; the formal four-fifths test compares each group against the highest-selection-rate group.
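The four-fifths computation can be sketched as follows, using the age-band selection rates from 5.1 (the `adverse_impact_ratios` helper is illustrative, not part of any template):

```python
def adverse_impact_ratios(selection_rates: dict) -> dict:
    """Ratio of each group's selection rate to the highest-rate group's.
    Under the EEOC four-fifths rule, a ratio below 0.80 signals
    potential adverse impact and triggers review."""
    best = max(selection_rates.values())
    return {group: rate / best for group, rate in selection_rates.items()}

ages = {"20-29": 0.252, "30-39": 0.241, "40-49": 0.223, "50+": 0.201}
ratios = adverse_impact_ratios(ages)
flagged = [g for g, r in ratios.items() if r < 0.80]
print({g: round(r, 2) for g, r in ratios.items()}, flagged)
```

Note that with the rounded rates reported in the table, the 50+ ratio sits right at the boundary (≈0.798, which rounds to 0.80), which is why that row is marked "Monitor" rather than a clean pass.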
Common Pitfalls Avoided:
- ❌ Using historical hiring outcomes without auditing for past bias
- ❌ Training on easily-gameable keywords rather than skill inference
- ❌ Allowing automatic rejections
- ❌ Insufficient demographic testing before deployment
Example B: Customer Support Chatbot
Scenario Description
TechSupport Inc wants to deploy an LLM-powered chatbot to handle first-line customer support for their software products. The system will:
- Answer common product questions
- Troubleshoot basic issues
- Escalate complex issues to human agents
- Not handle payments, account changes, or PII
This is a Medium-Risk use case (Limited Risk under the EU AI Act, which imposes chatbot disclosure requirements).
B.1 Completed AI Intake & Registration Form
| Field | Response |
|---|---|
| Project Name | SupportBot Pro |
| Inventory ID | AI-2026-CS-001 |
| Submission Date | 2026-01-16 |
| Business Unit / Department | Customer Service |
Problem Statement: Support ticket volume has grown 300% while staffing remained flat. 65% of tickets are common questions with documented answers. Response time SLA is being missed.
AI Solution Description: LLM-powered chatbot using RAG to answer questions from knowledge base. Escalates to human agents when confidence is low or issue is complex. Clear disclosure that user is chatting with AI.
Project Stage: ☑ Development
Source of Model: ☑ Commercial / Vendor (Claude API)
Model Category: ☑ Generative AI (Text)
Vendor Details:
| Field | Response |
|---|---|
| Vendor Name | Anthropic |
| Product/Service Name | Claude API |
| Contract Status | Signed |
| Data Processing Location | US |
| IP Indemnification Provided? | No (standard API terms) |
| Zero Data Retention Clause? | Yes (API prompts not used for training) |
| SOC 2 / ISO 27001 Certified? | Yes |
Data Sensitivity: ☑ Internal (Product documentation, FAQs)
Impact of Failure: ☑ Operational (Wrong troubleshooting steps, user frustration, escalation to human)
Level of Autonomy: ☑ Human-on-the-Loop (Bot handles routine queries; humans monitor and handle escalations)
User Impact Scope: ☑ External Customers (Support/Information)
Scale of Deployment:
| Field | Response |
|---|---|
| Estimated number of users | 100,000+ customers/month |
| Estimated decisions/predictions per day | 5,000 conversations |
| Geographic scope | International |
EU AI Act Classification: ☑ Limited Risk (AI system interacting with persons - disclosure required)
B.2 Completed AI System Card (Summary)
| Field | Response |
|---|---|
| System Name | SupportBot Pro |
| Risk Tier | Medium |
| EU AI Act Classification | Limited Risk |
| Model Type | Vendor LLM (Claude) + RAG |
| Human Oversight | Human-on-the-Loop with escalation |
Key Controls:
| Control | Implementation |
|---|---|
| AI Disclosure | "You're chatting with our AI assistant. Type 'human' for a person." |
| Confidence Threshold | Escalate if confidence < 0.7 |
| Topic Restrictions | No account changes, payments, or PII handling |
| Hallucination Mitigation | RAG with strict source grounding |
| Monitoring | Human review of 5% random sample daily |
Prohibited Actions:
- Modifying customer accounts
- Processing refunds or payments
- Accessing or displaying PII
- Making promises or commitments
- Providing legal or medical advice
B.3 Key Decisions & Commentary
Why Medium Risk (not High)? The chatbot provides informational support only. It cannot make decisions affecting customer accounts, finances, or access. All consequential actions require human intervention.
Why Limited Risk under EU AI Act? Article 50 requires disclosure when AI interacts with persons. The chatbot clearly identifies itself as AI and offers human escalation.
Guardrails Implementation:
- Input: Block attempts to extract system prompts, PII detection
- Output: Toxicity filtering, confidence thresholds, source citations
- Process: Automatic escalation paths, human monitoring
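The output-side guardrails and escalation paths above compose into a simple routing step. A sketch under stated assumptions: the function name, topic list, and return strings are illustrative, and real input-side filtering (prompt-injection detection, PII scrubbing) would run before this point:

```python
CONFIDENCE_THRESHOLD = 0.7
RESTRICTED_TOPICS = ("refund", "payment", "account change")

def route(user_message: str, draft_answer: str,
          confidence: float, sources: list) -> str:
    """Apply the output-side guardrails: honor the human-escape hatch,
    refuse restricted topics, and escalate ungrounded or low-confidence
    answers instead of sending them."""
    text = user_message.lower()
    if "human" in text.split():
        return "ESCALATE: user asked for a person"
    if any(topic in text for topic in RESTRICTED_TOPICS):
        return "ESCALATE: restricted topic"
    if confidence < CONFIDENCE_THRESHOLD or not sources:
        return "ESCALATE: low confidence or no grounding sources"
    return f"SEND ({len(sources)} source(s) cited): {draft_answer}"

print(route("How do I export my data?", "Use File > Export.", 0.91, ["kb/export.md"]))
print(route("human please", "", 0.99, ["kb/export.md"]))
```

The key design point is that escalation is the default: an answer is only sent when every guardrail affirmatively passes.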
Example C: Fraud Detection System
Scenario Description
SecureBank deploys an ML system to detect fraudulent transactions in real-time. The system:
- Scores every transaction for fraud probability
- Blocks high-confidence fraud automatically
- Flags medium-confidence for human review
- Operates 24/7 with sub-second latency requirements
This is a High-Risk use case (financial services; automated decisions affecting access to essential services).
C.1 Completed AI System Card (Summary)
| Field | Response |
|---|---|
| System Name | FraudShield ML |
| Risk Tier | High |
| EU AI Act Classification | High-Risk (Essential Services - Credit/Financial) |
| Model Type | XGBoost Ensemble + Neural Network |
| Human Oversight | Hybrid (auto-block high confidence, human review medium) |
Performance Metrics:
| Metric | Value | Threshold |
|---|---|---|
| True Positive Rate (Fraud Caught) | 94.2% | ≥90% |
| False Positive Rate (Legitimate Blocked) | 0.8% | ≤2% |
| Latency (p99) | 45ms | ≤100ms |
| Model Drift (weekly AUC delta) | 0.002 | ≤0.01 |
Human Oversight Thresholds:
| Confidence Score | Action | Human Involvement |
|---|---|---|
| ≥0.95 | Auto-block | Post-hoc review within 4 hours |
| ≥0.70 and <0.95 | Flag + Temporary Hold | Human review within 30 minutes |
| <0.70 | Allow | Sampled audit (2%) |
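The threshold table maps directly onto a routing function. A minimal sketch (the tiers and review windows come from the table; the function name is illustrative):

```python
def fraud_action(score: float) -> tuple:
    """Map a fraud probability to (action, human involvement),
    mirroring the human-oversight threshold table above."""
    if score >= 0.95:
        return ("auto_block", "post-hoc review within 4 hours")
    if score >= 0.70:
        return ("flag_and_hold", "human review within 30 minutes")
    return ("allow", "sampled audit (2%)")

print(fraud_action(0.97))
print(fraud_action(0.80))
print(fraud_action(0.10))
```

Ordering the comparisons from most to least severe keeps the boundaries unambiguous: a score of exactly 0.95 auto-blocks, and exactly 0.70 goes to human review.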
Appeal Process:
- Customer notified of block reason
- Appeal submitted via app/call
- Human review within 24 hours
- Escalation to fraud committee if needed
Key Risk Mitigations:
| Risk | Mitigation |
|---|---|
| False positives blocking legitimate transactions | Conservative thresholds, rapid appeal process |
| Demographic bias in fraud scoring | Monthly fairness audits across demographics |
| Adversarial attacks (fraud pattern learning) | Model ensemble, regular retraining, pattern diversity |
| Model decay | Continuous monitoring, weekly AUC tracking, auto-alerts |
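The model-decay mitigation (weekly AUC tracking with auto-alerts) reduces to a simple rule. A sketch, assuming the 0.01 tolerance from the performance metrics table; the function name and example AUC values are illustrative:

```python
AUC_DELTA_LIMIT = 0.01  # weekly drift tolerance from the metrics table

def drift_alert(baseline_auc: float, current_auc: float) -> bool:
    """True when the weekly AUC has moved beyond the tolerated delta,
    which should trigger the auto-alert / retraining path."""
    return abs(baseline_auc - current_auc) > AUC_DELTA_LIMIT

print(drift_alert(0.962, 0.960))  # small wobble, no alert
print(drift_alert(0.962, 0.945))  # drifted past the limit, alert
```

Using the absolute delta also catches suspicious AUC *improvements*, which can indicate label leakage or a data pipeline change rather than a genuinely better model.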
Example D: Internal Knowledge Assistant
Scenario Description
GlobalCorp deploys an internal AI assistant to help employees find information in company wikis, policies, and documentation. The system:
- Answers questions about company policies
- Finds relevant documents
- Summarizes meeting notes
- Internal use only; not customer-facing
This is a Low-Risk use case (internal productivity tool making no consequential decisions).
D.1 Completed AI Intake Form (Summary)
| Field | Response |
|---|---|
| Project Name | CorpAssist AI |
| Risk Tier | Low |
| EU AI Act Classification | Minimal Risk |
| Model Type | Vendor LLM + RAG |
| Data Sensitivity | Internal (company policies, wikis) |
| Human Oversight | Human-in-the-Loop (advisory only) |
| Impact of Failure | Annoyance (employee finds correct info manually) |
Why Low Risk?
- Internal use only
- Advisory/informational only
- No decisions made based on outputs
- No access to sensitive data (HR records, financials)
- Easy to verify answers against source documents
Simplified Controls:
| Control | Implementation |
|---|---|
| Source Citations | All answers include document links |
| Confidence Display | "I'm not sure" for low-confidence answers |
| Feedback Loop | Thumbs up/down on answers |
| Access Control | Internal employees only via SSO |
Template Selection Guide
Use this guide to determine which templates are required for your AI system:
| Risk Tier | Required Templates | Optional Templates |
|---|---|---|
| Low | Intake Form, Simplified System Card | AI-BOM |
| Medium | Intake Form, System Card, Basic AIA | AI-BOM, Monitoring Plan |
| High | Intake Form, Full System Card, AIA, AI-BOM | Incident Response Plan |
| Critical | All templates, Enhanced AIA, Red Team Report | External Audit Report |
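The selection guide is a straightforward lookup, which teams sometimes encode in intake tooling. A sketch (template names are from the table; the data structure and helper are illustrative):

```python
# (required, optional) templates per risk tier, from the guide above.
TEMPLATES = {
    "Low":      (["Intake Form", "Simplified System Card"], ["AI-BOM"]),
    "Medium":   (["Intake Form", "System Card", "Basic AIA"], ["AI-BOM", "Monitoring Plan"]),
    "High":     (["Intake Form", "Full System Card", "AIA", "AI-BOM"], ["Incident Response Plan"]),
    "Critical": (["All templates", "Enhanced AIA", "Red Team Report"], ["External Audit Report"]),
}

def required_templates(risk_tier: str) -> list:
    """Return the required templates for a risk tier; fail loudly on typos."""
    if risk_tier not in TEMPLATES:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}")
    return TEMPLATES[risk_tier][0]

print(required_templates("High"))
```

Failing loudly on an unknown tier matters in intake tooling: a silently empty template list would let a misclassified system skip documentation entirely.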
Common Mistakes to Avoid
Classification Mistakes
| Mistake | Why It's Wrong | Correct Approach |
|---|---|---|
| "It's just a chatbot, so Low Risk" | Chatbots can be High-Risk if they make decisions | Classify based on impact, not technology |
| "Humans review everything, so Low Risk" | Human review doesn't change inherent risk | Human oversight is a control, not classification |
| "It's internal only, so no compliance needed" | Internal systems can still harm employees | Assess all users, internal or external |
| "We use a vendor, so they're responsible" | Deployers have independent obligations | Both provider and deployer have duties |
Documentation Mistakes
| Mistake | Why It's Wrong | Correct Approach |
|---|---|---|
| Generic limitations ("may not work well") | Not actionable | Specific limits ("accuracy drops below 60% for languages other than English") |
| No bias testing for "neutral" use cases | All models can have bias | Test for bias on all demographic groups |
| Listing features, not risks | Documentation should enable risk understanding | Focus on what can go wrong and mitigations |
| Outdated documentation | Models change, docs must too | Version control, regular review cadence |
Document History
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2026-01-16 | AI Governance Office | Initial release with 4 examples |
CODITECT AI Risk Management Framework
Document ID: AI-RMF-24 | Version: 1.0.0 | Status: Active
AZ1.AI Inc. | CODITECT Platform
Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001
This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.
Repository: coditect-ai-risk-management-framework
Last Updated: 2026-01-16
Owner: AZ1.AI Inc. | Lead: Hal Casteel