
Finance Industry Appendix

AI Governance for Financial Services Applications


Document Control

| Field | Details |
|---|---|
| Document Type | Industry-Specific Appendix |
| Parent Documents | AI Governance Framework (Docs 01-23) |
| Applies To | AI systems in banking, insurance, investment management, payments |
| Regulatory Scope | SR 11-7, OCC 2011-12, SEC, FINRA, EU AI Act, DORA |
| Version | 1.0 |

1. Overview

1.1 Purpose

This appendix extends the AI Governance Framework for organizations developing or deploying AI systems in financial services. It provides additional controls, documentation requirements, and regulatory mappings specific to banking, insurance, securities, and payments environments.

1.2 Scope

| In Scope | Regulatory Framework |
|---|---|
| Credit Decisioning (Lending, Underwriting) | SR 11-7, ECOA, Fair Lending |
| Fraud Detection | BSA/AML, OFAC |
| Algorithmic Trading | SEC Rule 15c3-5, MiFID II |
| Insurance Underwriting/Claims | State Insurance Regulations, EU AI Act |
| Wealth/Investment Management | SEC, FINRA, DOL Fiduciary |
| Customer Service AI | UDAP/UDAAP, Fair Lending |
| Risk Management | Basel III/IV, SR 11-7 |
| AML/KYC Automation | BSA, FATF Guidelines |

1.3 EU AI Act Financial Services Classification

The EU AI Act designates several financial services AI applications as High-Risk:

| Use Case | EU AI Act Reference | Risk Classification |
|---|---|---|
| Creditworthiness assessment | Annex III, 5(b) | High-Risk |
| Credit scoring | Annex III, 5(b) | High-Risk |
| Insurance premium/claims evaluation (life and health) | Annex III, 5(c) | High-Risk |
| Emergency services prioritization | Annex III, 5(d) | High-Risk |
| Fraud detection (customer-impacting) | Assessment required | Potentially High-Risk |

2. SR 11-7 / OCC Model Risk Management

2.1 Overview

Federal Reserve SR 11-7 and OCC Bulletin 2011-12 establish model risk management (MRM) expectations for banking organizations. Supervisory guidance treats AI/ML systems as models within the scope of these requirements.

2.2 Model Risk Management Framework Mapping

| SR 11-7 Element | AI Governance Control | Framework Reference |
|---|---|---|
| Model Development | System Card, AI-BOM | Docs 06, 13 |
| Model Validation | AIA, Testing Standards | Doc 07, GenAI Addendum |
| Model Implementation | Pre-Production Gate | Operating Model §7 |
| Model Use | Human Oversight, Monitoring | Docs 01, 16 |
| Model Inventory | AI Inventory Registration | Doc 04 |
| Model Documentation | Technical Documentation | Docs 06, 13 |
| Ongoing Monitoring | Continuous Monitoring | Doc 16 |
| Governance & Controls | Operating Model, Policy | Docs 01, 05 |

2.3 AI-Specific MRM Controls

┌─────────────────────────────────────────────────────────────────┐
│                 SR 11-7 MODEL RISK MANAGEMENT                   │
│                      FOR AI/ML SYSTEMS                          │
└─────────────────────────────────────────────────────────────────┘
                                │
          ┌─────────────────────┼─────────────────────┐
          ▼                     ▼                     ▼
┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐
│      MODEL      │   │      MODEL      │   │     ONGOING     │
│   DEVELOPMENT   │   │   VALIDATION    │   │   MONITORING    │
│                 │   │                 │   │                 │
│ • Data quality  │   │ • Independent   │   │ • Drift         │
│ • Algorithm     │   │   review        │   │   detection     │
│   selection     │   │ • Challenger    │   │ • Outcome       │
│ • Feature       │   │   models        │   │   analysis      │
│   engineering   │   │ • Back-testing  │   │ • Bias          │
│ • Training      │   │ • Sensitivity   │   │   monitoring    │
│   governance    │   │   analysis      │   │ • Retrain       │
│                 │   │ • Bias testing  │   │   triggers      │
└─────────────────┘   └─────────────────┘   └─────────────────┘

2.4 Model Tiering for Financial Services

| Model Tier | Criteria | Validation Frequency | Documentation |
|---|---|---|---|
| Tier 1 (Critical) | Credit decisions >$10M, trading algorithms, capital models | Annual + event-driven | Full MRM package |
| Tier 2 (High) | Credit scoring, fraud detection, pricing models | Annual | Full documentation |
| Tier 3 (Medium) | Marketing models, forecasting, operational models | 18-24 months | Standard documentation |
| Tier 4 (Low) | Internal tools, non-decision models | 24-36 months | Simplified documentation |
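As a sketch of how the tiering table might be operationalized — the function, its argument names, and the cadence values (upper bounds of the stated ranges) are illustrative assumptions, not anything SR 11-7 prescribes:

```python
# Validation cadence per tier, in months (event-driven reviews for
# Tier 1 are handled separately).
VALIDATION_CADENCE_MONTHS = {1: 12, 2: 12, 3: 24, 4: 36}

def assign_tier(is_credit_decision: bool, exposure_usd: float,
                is_trading_or_capital_model: bool,
                influences_decisions: bool) -> int:
    """Assign a model tier per the criteria above (simplified:
    fraud/pricing models would need their own predicates)."""
    if is_trading_or_capital_model or (is_credit_decision
                                       and exposure_usd > 10_000_000):
        return 1  # Tier 1 (Critical)
    if is_credit_decision:
        return 2  # Tier 2 (High)
    if influences_decisions:
        return 3  # Tier 3 (Medium)
    return 4      # Tier 4 (Low): internal tools, non-decision models
```

In practice the tier assignment would be recorded in the model inventory (Doc 04) alongside the rationale.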

3. Fair Lending & ECOA Compliance

3.1 Overview

The Equal Credit Opportunity Act (ECOA) and Fair Housing Act prohibit discrimination in credit decisions. AI systems used in lending must demonstrate non-discrimination.

3.2 Prohibited Bases for Credit Decisions

| Protected Class | Regulation | AI Testing Required |
|---|---|---|
| Race | ECOA, FHA | Disparate impact analysis |
| Color | ECOA, FHA | Disparate impact analysis |
| Religion | ECOA | Disparate impact analysis |
| National Origin | ECOA, FHA | Disparate impact analysis |
| Sex/Gender | ECOA, FHA | Disparate impact analysis |
| Marital Status | ECOA | Disparate impact analysis |
| Age | ECOA | Disparate impact analysis |
| Public Assistance | ECOA | Disparate impact analysis |
| Familial Status | FHA | Disparate impact analysis |
| Disability | FHA | Disparate impact analysis |

3.3 Fair Lending Testing Framework

| Test Type | Description | Threshold | Frequency |
|---|---|---|---|
| Disparate Treatment | Direct discrimination in model features | Zero tolerance | Pre-deployment + annual |
| Disparate Impact | Unintentional discriminatory outcomes | 80% rule (4/5ths) | Quarterly |
| Proxy Analysis | Variables correlated with protected classes | Remove or justify | Pre-deployment |
| Adverse Action | Explainability of denials | Clear reasons required | Every decision |

3.4 Disparate Impact Analysis Template

┌─────────────────────────────────────────────────────────────────┐
│ DISPARATE IMPACT ANALYSIS │
│ Credit Decision Model │
└─────────────────────────────────────────────────────────────────┘

Model: [Credit Scoring Model v2.1]
Analysis Period: [Q4 2025]
Analyst: [Name]

APPROVAL RATE BY DEMOGRAPHIC GROUP:

| Group | Applications | Approvals | Rate | Ratio vs. Reference |
|-----------------|-------------|-----------|--------|-------------------|
| White | 10,000 | 6,500 | 65.0% | 1.00 (Reference) |
| Black | 3,000 | 1,620 | 54.0% | 0.83 ✅ |
| Hispanic | 2,500 | 1,500 | 60.0% | 0.92 ✅ |
| Asian | 2,000 | 1,400 | 70.0% | 1.08 ✅ |

Threshold: ≥0.80 relative to the reference group (4/5ths rule)
Status: ✅ PASS / ❌ FAIL / ⚠️ MONITOR

ADVERSE ACTION REASONS (Top 5):
1. Debt-to-income ratio exceeds threshold
2. Credit history length insufficient
3. Recent delinquencies
4. High credit utilization
5. Insufficient income documentation
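The approval-rate ratios in the template above can be computed with a short sketch; the function name and input shape are assumptions, and the reference group is passed in explicitly:

```python
def adverse_impact_ratios(counts: dict[str, tuple[int, int]],
                          reference: str) -> dict[str, float]:
    """Approval-rate ratio of each group vs. the reference group.

    counts maps group -> (applications, approvals). Ratios below 0.80
    warrant investigation under the 4/5ths rule.
    """
    rates = {g: approvals / apps for g, (apps, approvals) in counts.items()}
    ref_rate = rates[reference]
    return {g: round(r / ref_rate, 2) for g, r in rates.items()}
```

With the figures above (White 65.0%, Black 54.0%, Hispanic 60.0%, Asian 70.0%) this reproduces the 0.83 / 0.92 / 1.08 ratios shown in the template.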

3.5 Adverse Action Requirements

For AI-driven credit decisions:

| Requirement | Implementation |
|---|---|
| Specific reasons for denial | Explainable AI outputs mapping to reason codes |
| Principal reasons | Top 4-5 factors influencing decision |
| No prohibited factors | Audit trail proving protected classes not used |
| Consumer access | Right to receive explanation |
| Appeal process | Human review pathway |
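One common implementation pattern maps the most negative feature contributions (e.g., from SHAP-style attributions) to consumer-facing reason statements. The mapping, factor names, and contribution values below are hypothetical illustrations:

```python
# Hypothetical reason-code mapping; not drawn from any regulatory
# code set. Each model factor maps to a plain-language statement.
REASON_CODES = {
    "debt_to_income": "Debt-to-income ratio exceeds threshold",
    "credit_history_length": "Credit history length insufficient",
    "recent_delinquencies": "Recent delinquencies",
    "utilization": "High credit utilization",
}

def principal_reasons(contributions: dict[str, float],
                      top_n: int = 4) -> list[str]:
    """Return the top-N adverse factors (most negative contributions),
    mapped to consumer-facing reason statements."""
    adverse = sorted((v, k) for k, v in contributions.items() if v < 0)
    return [REASON_CODES[k] for _, k in adverse[:top_n] if k in REASON_CODES]
```

The audit trail would record both the raw contributions and the statements actually sent on the adverse action notice.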

4. Anti-Money Laundering (AML/BSA)

4.1 AI in AML Programs

| AML Function | AI Application | Regulatory Consideration |
|---|---|---|
| Transaction Monitoring | Anomaly detection, pattern recognition | SAR filing accuracy |
| Customer Due Diligence | Risk scoring, beneficial ownership | CDD rule compliance |
| Sanctions Screening | Name matching, entity resolution | OFAC compliance |
| Case Management | Alert prioritization, investigation assist | BSA program requirements |
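For sanctions name matching, a minimal fuzzy-matching sketch using Python's standard library is shown below. Production screening engines add phonetic, transliteration, and alias handling, so the function name and threshold are illustrative assumptions only:

```python
from difflib import SequenceMatcher

def screen_name(name: str, sanctions_list: list[str],
                threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return sanctioned names whose character-level similarity to
    `name` meets the threshold, highest score first."""
    name = name.lower()
    hits = []
    for entry in sanctions_list:
        score = SequenceMatcher(None, name, entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return sorted(hits, key=lambda h: -h[1])
```

Any hit would route to a human analyst; the threshold itself should be tuned and documented under the model governance controls in §4.2.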

4.2 AI Model Governance for AML

| Control | SR 11-7 Requirement | AI-Specific Implementation |
|---|---|---|
| Threshold Tuning | Documented rationale | ML-optimized thresholds with explainability |
| Alert Suppression | Risk-based approach | AI prioritization with human validation |
| False Positive Mgmt | Efficiency monitoring | Precision/recall tracking |
| Coverage Analysis | Risk coverage assessment | Model coverage validation |
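False-positive management can be tracked with standard precision/recall metrics over alert dispositions. The function below is a sketch; here a "true positive" is assumed to mean an alert that led to a SAR filing or confirmed suspicious activity:

```python
def alert_quality(true_positives: int, false_positives: int,
                  false_negatives: int) -> dict[str, float]:
    """Precision and recall for AML alert dispositions.

    Precision: share of generated alerts that were genuinely
    suspicious. Recall: share of suspicious activity that generated
    an alert (false negatives typically estimated via below-threshold
    sampling or lookbacks).
    """
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return {"precision": round(precision, 3), "recall": round(recall, 3)}
```

Regulators generally weight recall (missed suspicious activity) more heavily than precision, so tuning that raises precision must show coverage is preserved.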

4.3 Suspicious Activity Reporting

| Consideration | AI Requirement |
|---|---|
| Human Decision | SAR filing decision must involve human judgment |
| Audit Trail | AI contribution to SAR decision documented |
| Timeliness | AI must not delay SAR filing deadlines |
| Explainability | AI rationale for alert generation available |

5. Algorithmic Trading & Securities

5.1 SEC Rule 15c3-5 (Market Access Rule)

| Requirement | AI Governance Control |
|---|---|
| Risk Management Controls | Pre-trade risk limits, position monitoring |
| Financial Limits | AI cannot exceed predefined financial thresholds |
| Regulatory Limits | Short sale restrictions, order marking |
| Erroneous Order Prevention | Input validation, circuit breakers |
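The pre-trade controls above might be sketched as follows; the limit values, field names, and restricted list are illustrative assumptions, not values from Rule 15c3-5 itself:

```python
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    qty: int
    price: float

# Illustrative limits; real values come from the broker-dealer's
# risk management system and must be enforced pre-trade.
MAX_ORDER_QTY = 100_000
MAX_ORDER_NOTIONAL = 5_000_000
RESTRICTED_SYMBOLS = {"XYZ"}

def pre_trade_checks(order: Order) -> list[str]:
    """Return the list of control violations for an order."""
    violations = []
    if order.symbol in RESTRICTED_SYMBOLS:
        violations.append("restricted list")
    if order.qty > MAX_ORDER_QTY:
        violations.append("order size limit")
    if order.qty * order.price > MAX_ORDER_NOTIONAL:
        violations.append("notional limit")
    return violations  # empty list => order may proceed
```

An AI-generated order that fails any check is rejected before it reaches the market, satisfying the rule's requirement that controls sit between the strategy and market access.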

5.2 Trading Algorithm Controls

┌─────────────────────────────────────────────────────────────────┐
│                 ALGORITHMIC TRADING AI CONTROLS                 │
└─────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│ PRE-TRADE CONTROLS                                              │
│ • Position limits            • Capital limits                   │
│ • Order size limits          • Restricted list check            │
│ • Price collar checks        • Market hours validation          │
└─────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│ REAL-TIME MONITORING                                            │
│ • P&L monitoring             • Exposure tracking                │
│ • Message rate monitoring    • Latency tracking                 │
│ • Kill switch activation     • Manual override capability       │
└─────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│ POST-TRADE CONTROLS                                             │
│ • Trade reconciliation       • P&L attribution                  │
│ • Compliance review          • Best execution analysis          │
│ • Regulatory reporting       • Audit trail                      │
└─────────────────────────────────────────────────────────────────┘

5.3 Kill Switch Requirements

| Trigger | Action | Recovery |
|---|---|---|
| P&L limit breach | Immediate halt, cancel open orders | Manual review, executive approval |
| Position limit breach | Halt new orders, reduce exposure | Risk review, threshold reset |
| System malfunction | Immediate halt, alert operations | Technical review, testing |
| Market disruption | Halt trading, monitor | Market conditions review |
| Regulatory halt | Immediate compliance | Regulatory clearance |
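A minimal sketch of the first two triggers in the table, assuming illustrative P&L and position limits (the names and values are placeholders, not regulatory figures):

```python
from typing import Optional

# Illustrative limits; real thresholds come from the firm's risk policy.
PNL_LIMIT = -1_000_000        # halt if P&L falls to or below this
POSITION_LIMIT = 50_000_000   # halt new orders at this gross position

def evaluate_kill_switch(pnl: float, gross_position: float) -> Optional[str]:
    """Return the required action per the trigger table, or None."""
    if pnl <= PNL_LIMIT:
        return "halt-and-cancel"    # P&L breach: halt, cancel open orders
    if abs(gross_position) >= POSITION_LIMIT:
        return "halt-new-orders"    # position breach: stop new orders
    return None
```

In production this evaluation runs continuously in the real-time monitoring layer, and any non-None result also pages the trading desk and operations.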

5.4 FINRA Supervision Requirements

| Requirement | AI Governance Control |
|---|---|
| Supervisory Review | Human oversight of AI trading decisions |
| Written Procedures | Documented AI trading governance |
| Books and Records | Complete audit trail of AI activity |
| Customer Suitability | AI recommendations validated for suitability |

6. Insurance AI Governance

6.1 Insurance Use Cases

| Use Case | Regulation | Risk Level |
|---|---|---|
| Underwriting | State insurance regs, EU AI Act | High |
| Claims Processing | Unfair claims practices acts | High |
| Pricing/Rating | State rate filing requirements | High |
| Fraud Detection | State anti-fraud laws | Medium |
| Customer Service | Consumer protection laws | Medium |

6.2 Unfair Discrimination Prevention

Insurance AI must avoid unfair discrimination:

| Prohibited Factor | Testing Required | Mitigation |
|---|---|---|
| Race | Impact analysis | Remove proxies, calibration |
| Religion | Impact analysis | Remove proxies |
| National Origin | Impact analysis | Remove proxies |
| Gender (some states/products) | Varies by state | State-specific compliance |
| Credit-Based Insurance Score | State-specific rules | Disclosure, consumer rights |

6.3 Rate Filing Requirements

| Requirement | AI Documentation |
|---|---|
| Actuarial Justification | Model performance metrics, feature importance |
| Non-Discrimination | Fair lending-style disparate impact analysis |
| Data Sources | Complete data lineage, AI-BOM |
| Model Methodology | Technical documentation, algorithm description |
| Consumer Disclosure | Plain-language explanation of AI use |

7. Digital Operational Resilience Act (DORA)

7.1 Overview

DORA (EU Regulation 2022/2554) applies to financial entities operating in the EU and establishes ICT risk management requirements, including for AI systems.

7.2 DORA Requirements for AI

| DORA Article | Requirement | AI Governance Control |
|---|---|---|
| Art. 5-16 | ICT Risk Management | AI system risk assessment |
| Art. 17-23 | ICT Incident Reporting | AI incident response procedures |
| Art. 24-27 | Digital Operational Resilience Testing | AI system testing, red teaming |
| Art. 28-44 | Third-Party Risk Management | AI vendor due diligence |
| Art. 45-56 | Information Sharing | AI threat intelligence |

7.3 AI-Specific DORA Controls

| Control Area | Requirement |
|---|---|
| Change Management | AI model updates follow DORA change protocols |
| Business Continuity | AI system failover and recovery procedures |
| Incident Classification | AI incidents classified per DORA taxonomy |
| Outsourcing | AI vendor contracts include DORA provisions |
| Testing | Annual AI resilience testing |

8. Consumer Protection (UDAP/UDAAP)

8.1 Unfair, Deceptive, Abusive Acts or Practices

AI systems must not engage in UDAP/UDAAP:

| Standard | AI Application | Control |
|---|---|---|
| Unfair | AI decision causes substantial injury | Outcome monitoring, impact assessment |
| Deceptive | AI provides misleading information | Output validation, disclosure |
| Abusive | AI takes unreasonable advantage | Vulnerability detection, human oversight |

8.2 AI Chatbot UDAP Considerations

| Risk | Mitigation |
|---|---|
| Inaccurate information | RAG with verified sources, confidence thresholds |
| Undisclosed AI interaction | Clear disclosure: "You are chatting with AI" |
| Inability to reach human | Clear escalation path, "type HUMAN for agent" |
| Privacy violations | PII detection, data minimization |
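PII detection for chatbot transcripts can start from simple pattern redaction. The patterns below are deliberately minimal illustrations and no substitute for a production PII-detection service:

```python
import re

# Minimal PII patterns; real deployments need broader coverage
# (names, addresses, account numbers) and validation logic.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII spans with bracketed labels."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text
```

Redaction should run before transcripts are logged or sent to any third-party model, supporting the data-minimization control above.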

9. Financial Services AI Documentation Requirements

9.1 Model Documentation Checklist

| Element | SR 11-7 | EU AI Act | Fair Lending |
|---|---|---|---|
| Business purpose | Required | Required | Required |
| Data sources | Required | Required | Required |
| Algorithm description | Required | Required | Required |
| Performance metrics | Required | Required | Required |
| Limitations | Required | Required | Required |
| Validation results | Required | Required | Required |
| Bias/fairness testing | Best practice | Required (high-risk) | Required |
| Explainability | Best practice | Required (high-risk) | Required |
| Monitoring plan | Required | Required | Required |
| Change management | Required | Required | Required |

9.2 Validation Documentation

| Validation Element | Description | Frequency |
|---|---|---|
| Conceptual Soundness | Theory, methodology review | Development + major changes |
| Data Quality | Input data assessment | Quarterly |
| Developmental Evidence | Back-testing, out-of-sample | Annual |
| Outcome Analysis | Predicted vs. actual | Monthly |
| Benchmarking | Challenger model comparison | Annual |
| Sensitivity Analysis | Parameter stability | Annual |
| Stress Testing | Extreme scenario performance | Annual |
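Drift detection for ongoing monitoring is often implemented with the Population Stability Index (PSI). The sketch below works over pre-binned distributions; the thresholds in the docstring are a common rule of thumb, not a regulatory requirement:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over pre-binned distributions.

    Inputs are per-bin proportions summing to 1 (e.g., score deciles
    at development time vs. the current period). Rule of thumb:
    PSI < 0.10 stable, 0.10-0.25 monitor, > 0.25 investigate/retrain.
    """
    eps = 1e-6  # guard against empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))
```

A PSI breach on a Tier 1/2 model would feed the retrain triggers noted in the §2.3 monitoring pillar and be recorded in validation documentation.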

10. Regulatory Examination Preparation

10.1 Common Examination Focus Areas

| Area | Examiner Questions | Documentation Ready |
|---|---|---|
| Model Inventory | "Show me all AI models in production" | Inventory with risk tiers |
| Validation | "Show me validation for [model]" | Validation reports, findings |
| Fair Lending | "Demonstrate non-discrimination" | Disparate impact analyses |
| Governance | "Who approved this model?" | Approval documentation |
| Monitoring | "How do you detect model degradation?" | Monitoring dashboards |
| Incidents | "What incidents have occurred?" | Incident log, remediation |

10.2 Examination Preparation Checklist

30 Days Before:

  • Inventory all AI models, verify risk classifications
  • Compile validation reports for all Tier 1/2 models
  • Gather fair lending analyses for credit models
  • Prepare governance organization chart
  • Review model risk management policy currency

7 Days Before:

  • Brief key stakeholders on AI model inventory
  • Prepare demonstration environment
  • Stage documentation in examination room
  • Identify subject matter experts for each model
  • Review recent incidents and remediation

During Examination:

  • Document all examiner requests
  • Provide timely, accurate responses
  • Escalate concerns to management immediately
  • Track open items and deadlines

11. Document History

| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2026-01-16 | AI Governance Office | Initial release |

CODITECT AI Risk Management Framework

Document ID: AI-RMF-25 | Version: 1.0.0 | Status: Active


AZ1.AI Inc. | CODITECT Platform

Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001


This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework | Last Updated: 2026-01-16 | Owner: AZ1.AI Inc. | Lead: Hal Casteel