
Worked Examples and Sample Templates

Practical Implementation Guidance


Document Purpose

This document provides complete, filled-out examples of key AI governance templates to help practitioners understand how to apply the framework in real-world scenarios. Each example includes:

  • A realistic use case scenario
  • Complete template with sample data
  • Commentary on key decisions
  • Common pitfalls to avoid

Example Index

| Example | Use Case | Risk Level | EU AI Act |
|---|---|---|---|
| Example A | HR Resume Screening | High | High-Risk (Employment) |
| Example B | Customer Support Chatbot | Medium | Limited Risk |
| Example C | Fraud Detection | High | High-Risk (Financial) |
| Example D | Internal Knowledge Assistant | Low | Minimal Risk |

Example A: HR Resume Screening AI

Scenario Description

ACME Corp wants to deploy an AI system to screen job applications for software engineering positions. The system will:

  • Parse resumes and extract qualifications
  • Score candidates against job requirements
  • Rank candidates for human recruiter review
  • Flag potential concerns for closer examination

This is a High-Risk use case under the EU AI Act (Annex III: employment and recruitment).


A.1 Completed AI Intake & Registration Form

| Field | Response |
|---|---|
| Project Name | TalentScreen AI |
| Inventory ID | AI-2026-HR-001 |
| Submission Date | 2026-01-16 |
| Business Unit / Department | Human Resources |

Problem Statement: Manual resume screening takes an average of 23 minutes per application. With 500+ applications per role, recruiters spend excessive time on initial screening, delaying time-to-hire and missing qualified candidates due to fatigue.

AI Solution Description: TalentScreen AI uses NLP to extract skills, experience, and qualifications from resumes and scores candidates against predefined job requirements. Human recruiters review all AI-flagged candidates and make final decisions.

Project Stage: ☑ Pre-Production / Testing

Key Roles:

| Role | Name | Email | Department |
|---|---|---|---|
| Business Owner | Sarah Chen | sarah.chen@acme.com | HR |
| Technical Lead | Marcus Johnson | marcus.j@acme.com | Engineering |
| Project Manager | Lisa Park | lisa.park@acme.com | HR Operations |
| Data Owner | James Wright | james.w@acme.com | People Analytics |

Source of Model: ☑ Hybrid (Fine-tuning vendor model with company data)

Model Category: ☑ Predictive/Classification, ☑ NLP

Vendor Details:

| Field | Response |
|---|---|
| Vendor Name | ResumeAI Corp |
| Product/Service Name | ResumeAI Enterprise |
| Contract Status | In negotiation |
| Data Processing Location | US (AWS us-east-1) |
| IP Indemnification Provided? | Yes |
| Zero Data Retention Clause? | Yes |
| SOC 2 / ISO 27001 Certified? | Yes (SOC 2 Type II) |

Data Sensitivity: ☑ Confidential (PII: names, emails, education, work history)

Impact of Failure: ☑ Financial/Legal (Discrimination risk, regulatory fine, qualified candidate rejection)

Failure Scenarios:

  • AI systematically underscores candidates from certain universities (proxy discrimination)
  • Gender-biased scoring due to historical hiring data patterns
  • Technical skill mismatch leading to wrong candidates advancing

Level of Autonomy: ☑ Human-in-the-Loop (Recruiter reviews and approves all shortlists)

User Impact Scope: ☑ External Customers (Job applicants)

Scale of Deployment:

| Field | Response |
|---|---|
| Estimated number of users | 50 recruiters, 10,000+ applicants/year |
| Estimated decisions/predictions per day | 200 resume scores |
| Geographic scope | National (US, with EU expansion planned) |

EU AI Act High-Risk Category: ☑ Employment (recruitment, screening)

Other Regulatory Requirements: ☑ CCPA/CPRA (California applicants), ☑ NYC Local Law 144 (automated employment decision tools)


A.2 Completed AI System Card

Document Control

| Field | Response |
|---|---|
| Document Status | Under Review |
| Associated Registry ID | AI-2026-HR-001 |
| Last Updated | 2026-01-16 |
| Version | v0.9.0 (Pre-production) |
| Framework Alignment | NIST AI RMF 2.0, EU AI Act Article 11 |

1. System Identity

| Field | Response |
|---|---|
| System Name | TalentScreen AI |
| Version | 1.0.0-beta |
| Model Type | Fine-tuned LLM + XGBoost Ensemble |
| Business Owner | Sarah Chen (VP, Talent Acquisition) |
| Technical Owner | Marcus Johnson (Sr. ML Engineer) |
| Development Date | 2025-11-01 |
| Production Date | Planned: 2026-03-01 |
| Risk Tier | High |
| EU AI Act Classification | High-Risk (Employment) |

2. Intended Use & Limitations

2.1 Intended Use Cases

| Field | Description |
|---|---|
| Primary Task | Score and rank job applications for software engineering roles |
| Target Audience | ACME Corp recruiters and hiring managers |
| Deployment Environment | Internal HR systems, ATS integration |
| Expected Frequency | ~200 resume evaluations per business day |

2.2 Out-of-Scope Use Cases (Anti-Patterns)

| Prohibited Use | Reason |
|---|---|
| Automatic rejection without human review | Legal requirement for human oversight |
| Executive or leadership roles | Insufficient training data, higher stakes |
| Roles outside software engineering | Model trained specifically for tech roles |
| Final hiring decision | Human must make all employment decisions |

2.3 Limitations

| Limitation Type | Description |
|---|---|
| Knowledge Cutoff | Training data: 2020-2025 applications |
| Language Support | English only |
| Input Constraints | PDF/DOCX resumes, max 10 pages |
| Environmental Factors | Performance degrades with non-standard formats |
| Known Weaknesses | Career changers, non-linear career paths, international credentials |

3. Data Lineage & Privacy

3.1 Training Data

| Field | Response |
|---|---|
| Data Sources | Historical ACME applications (2020-2025), anonymized public job descriptions |
| Data Collection Period | 2020-01-01 to 2025-09-30 |
| Data Volume | 45,000 resumes, 12,000 hiring outcomes |
| Preprocessing Steps | PII tokenization, format standardization, deduplication |

3.2 Sensitive Data Assessment

| Data Category | Present | Mitigation |
|---|---|---|
| PII (Names, emails, addresses) | Yes | Tokenized before model inference, encrypted at rest |
| SPI (Health/Finance) | No | Explicitly filtered from training |
| IP (Intellectual Property) | No | N/A |
| Biometric Data | No | Photos stripped from resumes |

3.3 Data Rights

| Question | Response |
|---|---|
| Legal right to use data for training? | Yes (candidate consent in application terms) |
| Commercial use permitted by license? | Yes |
| Copyright compliance verified? | Yes |
| Training data summary published? (GPAI) | N/A (not GPAI) |

4. Performance Metrics

| Metric | Value | Threshold | Status |
|---|---|---|---|
| Precision (qualified candidates) | 0.87 | ≥0.80 | ✅ Pass |
| Recall (not missing qualified) | 0.92 | ≥0.85 | ✅ Pass |
| F1 Score | 0.89 | ≥0.82 | ✅ Pass |
| False Positive Rate | 0.15 | ≤0.20 | ✅ Pass |
| False Negative Rate | 0.08 | ≤0.15 | ✅ Pass |

5. Fairness & Bias Assessment

5.1 Demographic Parity Analysis

| Protected Group | Selection Rate | Adverse Impact Ratio | Status |
|---|---|---|---|
| Gender: Male | 24.3% | Reference | N/A |
| Gender: Female | 22.1% | 0.91 | ✅ Pass (≥0.80) |
| Gender: Non-binary | 23.8% | 0.98 | ✅ Pass |
| Ethnicity: White | 23.5% | Reference | N/A |
| Ethnicity: Asian | 25.1% | 1.07 | ✅ Pass |
| Ethnicity: Black | 21.2% | 0.90 | ✅ Pass |
| Ethnicity: Hispanic | 22.8% | 0.97 | ✅ Pass |
| Age: 20-29 | 25.2% | Reference | N/A |
| Age: 30-39 | 24.1% | 0.96 | ✅ Pass |
| Age: 40-49 | 22.3% | 0.89 | ✅ Pass |
| Age: 50+ | 20.1% | 0.80 | ⚠️ Monitor |

5.2 Bias Mitigation Measures

| Measure | Implementation |
|---|---|
| Name/Gender Masking | Names replaced with tokens during inference |
| University De-weighting | Elite university names masked, focus on skills |
| Gap Penalty Removal | Employment gaps not penalized in scoring |
| Calibration by Group | Score distributions equalized across demographics |

6. Human Oversight

| Control | Implementation |
|---|---|
| Review Requirement | 100% of AI recommendations reviewed by recruiter |
| Override Capability | Recruiter can override any AI score |
| Escalation Path | Bias concerns → HR Director → Legal |
| Audit Trail | All decisions logged with justification |

7. Risk Assessment Summary

| Risk | Likelihood | Impact | Mitigation | Residual |
|---|---|---|---|---|
| Discriminatory scoring | Medium | High | Bias testing, masking, calibration | Low |
| Data breach | Low | High | Encryption, access controls, auditing | Low |
| Model drift | Medium | Medium | Monthly performance monitoring | Low |
| Adversarial gaming | Low | Low | Input validation, anomaly detection | Low |

8. Approvals

| Role | Name | Date | Status |
|---|---|---|---|
| Business Owner | Sarah Chen | 2026-01-10 | Approved |
| Technical Review | Marcus Johnson | 2026-01-12 | Approved |
| Legal Review | Amanda Foster | 2026-01-14 | Pending |
| AI Governance | AI Governance Office | Pending | Pending |
| HR Compliance | Jennifer Liu | 2026-01-15 | Approved |

A.3 Key Decisions & Commentary

Why High-Risk Classification? Employment decisions directly impact people's livelihoods and are explicitly listed in EU AI Act Annex III. Even though humans make final decisions, the AI's ranking significantly influences outcomes.

Why Human-in-the-Loop (not Human-on-the-Loop)? Given the high-risk nature, we require explicit human review of every AI recommendation, not just monitoring. Recruiters must actively confirm or override each shortlist.

Bias Testing Approach: We used the 4/5ths rule (80% rule) from EEOC guidelines as our adverse impact threshold. Any group with selection rate below 80% of the highest-rate group triggers review.
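The 4/5ths check described above can be sketched as a small Python function. This is illustrative only; the function name `adverse_impact_ratios` is our own, and the sample rates are taken from the gender rows of the demographic parity table in A.2:

```python
def adverse_impact_ratios(selection_rates):
    """Compare each group's selection rate to the highest-rate group.

    selection_rates: dict mapping group name -> selection rate (0-1).
    Returns {group: (ratio, passes_four_fifths_rule)}.
    """
    reference = max(selection_rates.values())
    return {
        group: (rate / reference, rate / reference >= 0.80)
        for group, rate in selection_rates.items()
    }

# Gender rows from the demographic parity table in A.2
rates = {"Male": 0.243, "Female": 0.221, "Non-binary": 0.238}
results = adverse_impact_ratios(rates)
# Female: 0.221 / 0.243 ≈ 0.91, above the 0.80 threshold
```

In practice this check would run per protected attribute (gender, ethnicity, age band) on each evaluation batch, with any failing group routed to the bias escalation path in section 6.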

Common Pitfalls Avoided:

  1. ❌ Using historical hiring outcomes without auditing for past bias
  2. ❌ Training on easily-gameable keywords rather than skill inference
  3. ❌ Allowing automatic rejections
  4. ❌ Insufficient demographic testing before deployment

Example B: Customer Support Chatbot

Scenario Description

TechSupport Inc wants to deploy an LLM-powered chatbot to handle first-line customer support for their software products. The system will:

  • Answer common product questions
  • Troubleshoot basic issues
  • Escalate complex issues to human agents
  • Not handle payments, account changes, or PII

This is a Medium Risk use case (Limited Risk under the EU AI Act due to chatbot disclosure requirements).


B.1 Completed AI Intake & Registration Form

| Field | Response |
|---|---|
| Project Name | SupportBot Pro |
| Inventory ID | AI-2026-CS-001 |
| Submission Date | 2026-01-16 |
| Business Unit / Department | Customer Service |

Problem Statement: Support ticket volume has grown 300% while staffing remained flat. 65% of tickets are common questions with documented answers. Response time SLA is being missed.

AI Solution Description: LLM-powered chatbot using RAG to answer questions from knowledge base. Escalates to human agents when confidence is low or issue is complex. Clear disclosure that user is chatting with AI.

Project Stage: ☑ Development

Source of Model: ☑ Commercial / Vendor (Claude API)

Model Category: ☑ Generative AI (Text)

Vendor Details:

| Field | Response |
|---|---|
| Vendor Name | Anthropic |
| Product/Service Name | Claude API |
| Contract Status | Signed |
| Data Processing Location | US |
| IP Indemnification Provided? | No (standard API terms) |
| Zero Data Retention Clause? | Yes (API prompts not used for training) |
| SOC 2 / ISO 27001 Certified? | Yes |

Data Sensitivity: ☑ Internal (Product documentation, FAQs)

Impact of Failure: ☑ Operational (Wrong troubleshooting steps, user frustration, escalation to human)

Level of Autonomy: ☑ Human-on-the-Loop (Bot handles routine queries; humans monitor and handle escalations)

User Impact Scope: ☑ External Customers (Support/Information)

Scale of Deployment:

| Field | Response |
|---|---|
| Estimated number of users | 100,000+ customers/month |
| Estimated decisions/predictions per day | 5,000 conversations |
| Geographic scope | International |

EU AI Act Classification: ☑ Limited Risk (AI system interacting with persons - disclosure required)


B.2 Completed AI System Card (Summary)

| Field | Response |
|---|---|
| System Name | SupportBot Pro |
| Risk Tier | Medium |
| EU AI Act Classification | Limited Risk |
| Model Type | Vendor LLM (Claude) + RAG |
| Human Oversight | Human-on-the-Loop with escalation |

Key Controls:

| Control | Implementation |
|---|---|
| AI Disclosure | "You're chatting with our AI assistant. Type 'human' for a person." |
| Confidence Threshold | Escalate if confidence < 0.7 |
| Topic Restrictions | No account changes, payments, or PII handling |
| Hallucination Mitigation | RAG with strict source grounding |
| Monitoring | Human review of 5% random sample daily |

Prohibited Actions:

  • Modifying customer accounts
  • Processing refunds or payments
  • Accessing or displaying PII
  • Making promises or commitments
  • Providing legal or medical advice

B.3 Key Decisions & Commentary

Why Medium Risk (not High)? The chatbot provides informational support only. It cannot make decisions affecting customer accounts, finances, or access. All consequential actions require human intervention.

Why Limited Risk under EU AI Act? Article 50 requires disclosure when an AI system interacts with natural persons. The chatbot clearly identifies itself as AI and offers human escalation.

Guardrails Implementation:

  • Input: Block attempts to extract system prompts, PII detection
  • Output: Toxicity filtering, confidence thresholds, source citations
  • Process: Automatic escalation paths, human monitoring
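A minimal sketch of the escalation logic behind these guardrails, assuming a hypothetical `route_message` helper, illustrative topic keywords, and the 0.7 confidence threshold from the controls table (a real deployment would add toxicity filtering, PII detection, and prompt-injection checks):

```python
CONFIDENCE_THRESHOLD = 0.7  # from the Key Controls table
RESTRICTED_TOPICS = ("payment", "refund", "account change")  # illustrative keywords

def route_message(user_text: str, confidence: float) -> str:
    """Return 'answer' or 'escalate' for a single chatbot turn (sketch only)."""
    text = user_text.lower()
    # Input guard: restricted topics always go to a human agent
    if any(topic in text for topic in RESTRICTED_TOPICS):
        return "escalate"
    # Disclosure promise: user can always type 'human' to reach a person
    if "human" in text:
        return "escalate"
    # Output guard: low-confidence answers are never sent automatically
    if confidence < CONFIDENCE_THRESHOLD:
        return "escalate"
    return "answer"
```

Keeping the routing decision in one small, testable function makes the escalation behavior easy to audit against the documented controls.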

Example C: Fraud Detection System

Scenario Description

SecureBank deploys an ML system to detect fraudulent transactions in real-time. The system:

  • Scores every transaction for fraud probability
  • Blocks high-confidence fraud automatically
  • Flags medium-confidence for human review
  • Operates 24/7 with sub-second latency requirements

This is a High Risk use case (financial services, automated decisions affecting access to essential services).


C.1 Completed AI System Card (Summary)

| Field | Response |
|---|---|
| System Name | FraudShield ML |
| Risk Tier | High |
| EU AI Act Classification | High-Risk (Essential Services - Credit/Financial) |
| Model Type | XGBoost Ensemble + Neural Network |
| Human Oversight | Hybrid (auto-block high confidence, human review medium) |

Performance Metrics:

| Metric | Value | Threshold |
|---|---|---|
| True Positive Rate (Fraud Caught) | 94.2% | ≥90% |
| False Positive Rate (Legitimate Blocked) | 0.8% | ≤2% |
| Latency (p99) | 45ms | ≤100ms |
| Model Drift (weekly AUC delta) | 0.002 | ≤0.01 |

Human Oversight Thresholds:

| Confidence Score | Action | Human Involvement |
|---|---|---|
| ≥0.95 | Auto-block | Post-hoc review within 4 hours |
| 0.70 - 0.95 | Flag + Temporary Hold | Human review within 30 minutes |
| <0.70 | Allow | Sampled audit (2%) |
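The oversight tiers can be sketched as a single routing function. This is a hedged illustration, not SecureBank's implementation; the `triage` name and the tier labels are ours, with thresholds taken from the table above:

```python
def triage(score: float) -> tuple[str, str]:
    """Map a fraud-probability score to (action, human involvement) tiers."""
    if score >= 0.95:
        # High confidence: block immediately, review after the fact
        return ("auto_block", "post_hoc_review_within_4h")
    if score >= 0.70:
        # Medium confidence: hold the transaction for a human decision
        return ("flag_and_hold", "human_review_within_30m")
    # Low confidence: allow, but keep a small audited sample
    return ("allow", "sampled_audit_2_percent")
```

Encoding the thresholds in one place keeps the documented oversight policy and the running code from drifting apart.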

Appeal Process:

  • Customer notified of block reason
  • Appeal submitted via app/call
  • Human review within 24 hours
  • Escalation to fraud committee if needed

Key Risk Mitigations:

| Risk | Mitigation |
|---|---|
| False positives blocking legitimate transactions | Conservative thresholds, rapid appeal process |
| Demographic bias in fraud scoring | Monthly fairness audits across demographics |
| Adversarial attacks (fraud pattern learning) | Model ensemble, regular retraining, pattern diversity |
| Model decay | Continuous monitoring, weekly AUC tracking, auto-alerts |
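The weekly AUC drift check behind the model-decay mitigation can be sketched in a few lines (illustrative only; `drift_alert` is a hypothetical helper, with the 0.01 limit taken from the metrics table above):

```python
AUC_DRIFT_THRESHOLD = 0.01  # max allowed weekly AUC delta, per the metrics table

def drift_alert(baseline_auc: float, current_auc: float) -> bool:
    """Return True when weekly AUC movement exceeds the alerting threshold."""
    return abs(baseline_auc - current_auc) > AUC_DRIFT_THRESHOLD
```

In production this would run on a schedule against a held-out evaluation set, with a True result paging the on-call team and triggering the retraining workflow.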

Example D: Internal Knowledge Assistant

Scenario Description

GlobalCorp deploys an internal AI assistant to help employees find information in company wikis, policies, and documentation. The system:

  • Answers questions about company policies
  • Finds relevant documents
  • Summarizes meeting notes
  • Internal use only, no customer-facing

This is a Low Risk use case (internal productivity tool, no consequential decisions).


D.1 Completed AI Intake Form (Summary)

| Field | Response |
|---|---|
| Project Name | CorpAssist AI |
| Risk Tier | Low |
| EU AI Act Classification | Minimal Risk |
| Model Type | Vendor LLM + RAG |
| Data Sensitivity | Internal (company policies, wikis) |
| Human Oversight | Human-in-the-Loop (advisory only) |
| Impact of Failure | Annoyance (employee finds correct info manually) |

Why Low Risk?

  • Internal use only
  • Advisory/informational only
  • No decisions made based on outputs
  • No access to sensitive data (HR records, financials)
  • Easy to verify answers against source documents

Simplified Controls:

| Control | Implementation |
|---|---|
| Source Citations | All answers include document links |
| Confidence Display | "I'm not sure" for low-confidence answers |
| Feedback Loop | Thumbs up/down on answers |
| Access Control | Internal employees only via SSO |

Template Selection Guide

Use this guide to determine which templates are required for your AI system:

| Risk Tier | Required Templates | Optional Templates |
|---|---|---|
| Low | Intake Form, Simplified System Card | AI-BOM |
| Medium | Intake Form, System Card, Basic AIA | AI-BOM, Monitoring Plan |
| High | Intake Form, Full System Card, AIA, AI-BOM | Incident Response Plan |
| Critical | All templates, Enhanced AIA, Red Team Report | External Audit Report |

Common Mistakes to Avoid

Classification Mistakes

| Mistake | Why It's Wrong | Correct Approach |
|---|---|---|
| "It's just a chatbot, so Low Risk" | Chatbots can be High-Risk if they make decisions | Classify based on impact, not technology |
| "Humans review everything, so Low Risk" | Human review doesn't change inherent risk | Human oversight is a control, not classification |
| "It's internal only, so no compliance needed" | Internal systems can still harm employees | Assess all users, internal or external |
| "We use a vendor, so they're responsible" | Deployers have independent obligations | Both provider and deployer have duties |

Documentation Mistakes

| Mistake | Why It's Wrong | Correct Approach |
|---|---|---|
| Generic limitations ("may not work well") | Not actionable | Specific limits ("accuracy drops below 60% for languages other than English") |
| No bias testing for "neutral" use cases | All models can have bias | Test for bias on all demographic groups |
| Listing features, not risks | Documentation should enable risk understanding | Focus on what can go wrong and mitigations |
| Outdated documentation | Models change, docs must too | Version control, regular review cadence |

Document History

| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2026-01-16 | AI Governance Office | Initial release with 4 examples |

CODITECT AI Risk Management Framework

Document ID: AI-RMF-24 | Version: 1.0.0 | Status: Active


AZ1.AI Inc. | CODITECT Platform

Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001


This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework
Last Updated: 2026-01-16
Owner: AZ1.AI Inc. | Lead: Hal Casteel