Finance Industry Appendix
AI Governance for Financial Services Applications
Document Control
| Field | Details |
|---|---|
| Document Type | Industry-Specific Appendix |
| Parent Documents | AI Governance Framework (Docs 01-23) |
| Applies To | AI systems in banking, insurance, investment management, payments |
| Regulatory Scope | SR 11-7, OCC 2011-12, SEC, FINRA, EU AI Act, DORA |
| Version | 1.0 |
1. Overview
1.1 Purpose
This appendix extends the AI Governance Framework for organizations developing or deploying AI systems in financial services. It provides additional controls, documentation requirements, and regulatory mappings specific to banking, insurance, securities, and payments environments.
1.2 Scope
| In Scope | Regulatory Framework |
|---|---|
| Credit Decisioning (Lending, Underwriting) | SR 11-7, ECOA, Fair Lending |
| Fraud Detection | BSA/AML, OFAC |
| Algorithmic Trading | SEC Rule 15c3-5, MiFID II |
| Insurance Underwriting/Claims | State Insurance Regulations, EU AI Act |
| Wealth/Investment Management | SEC, FINRA, DOL Fiduciary |
| Customer Service AI | UDAP/UDAAP, Fair Lending |
| Risk Management | Basel III/IV, SR 11-7 |
| AML/KYC Automation | BSA, FATF Guidelines |
1.3 EU AI Act Financial Services Classification
The EU AI Act designates several financial services AI applications as High-Risk:
| Use Case | EU AI Act Reference | Risk Classification |
|---|---|---|
| Creditworthiness assessment | Annex III, 5(b) | High-Risk |
| Credit scoring | Annex III, 5(b) | High-Risk |
| Insurance risk assessment and pricing (life and health) | Annex III, 5(c) | High-Risk |
| Emergency call triage and dispatch | Annex III, 5(d) | High-Risk |
| Fraud detection (customer-impacting) | Assessment required | Potentially High-Risk |
2. SR 11-7 / OCC Model Risk Management
2.1 Overview
Federal Reserve SR 11-7 and OCC 2011-12 establish model risk management (MRM) requirements for banking organizations. The guidance's broad definition of a model brings AI/ML systems within scope, as supervisors have repeatedly confirmed.
2.2 Model Risk Management Framework Mapping
| SR 11-7 Element | AI Governance Control | Framework Reference |
|---|---|---|
| Model Development | System Card, AI-BOM | Docs 06, 13 |
| Model Validation | AIA, Testing Standards | Doc 07, GenAI Addendum |
| Model Implementation | Pre-Production Gate | Operating Model §7 |
| Model Use | Human Oversight, Monitoring | Docs 01, 16 |
| Model Inventory | AI Inventory Registration | Doc 04 |
| Model Documentation | Technical Documentation | Docs 06, 13 |
| Ongoing Monitoring | Continuous Monitoring | Doc 16 |
| Governance & Controls | Operating Model, Policy | Docs 01, 05 |
2.3 AI-Specific MRM Controls
```
┌─────────────────────────────────────────────────────────────────┐
│ SR 11-7 MODEL RISK MANAGEMENT │
│ FOR AI/ML SYSTEMS │
└─────────────────────────────────────────────────────────────────┘
│
┌─────────────────────────┼─────────────────────────────────┐
│ │ │
▼ ▼ ▼
┌─────────────┐ ┌─────────────────┐ ┌─────────────┐
│ MODEL │ │ MODEL │ │ ONGOING │
│ DEVELOPMENT │ │ VALIDATION │ │ MONITORING │
│ │ │ │ │ │
│ • Data │ │ • Independent │ │ • Drift │
│ quality │ │ review │ │ detection │
│ • Algorithm │ │ • Challenger │ │ • Outcome │
│ selection │ │ models │ │ analysis │
│ • Feature │ │ • Back-testing │ │ • Bias │
│ engineering│ │ • Sensitivity │ │ monitoring│
│ • Training │ │ analysis │ │ • Retrain │
│ governance│ │ • Bias testing │ │ triggers │
└─────────────┘ └─────────────────┘ └─────────────┘
```
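The drift-detection leg of ongoing monitoring can be sketched with the Population Stability Index (PSI), a metric widely used in model risk management to compare a current score distribution against the validation baseline. A minimal sketch: the ten-bucket layout and the 0.10/0.25 rule-of-thumb thresholds in the comment are industry conventions, not regulatory requirements.

```python
import math

def psi(expected, actual, buckets=10):
    """Population Stability Index between a baseline (expected) and a
    current (actual) score distribution. Rule of thumb: < 0.10 stable,
    0.10-0.25 monitor, > 0.25 significant drift (retrain trigger)."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / buckets for i in range(buckets + 1)]
    edges[-1] += 1e-9  # include the max value in the last bucket

    def bucket_fractions(scores):
        counts = [0] * buckets
        for s in scores:
            for i in range(buckets):
                if edges[i] <= s < edges[i + 1]:
                    counts[i] += 1
                    break
        n = len(scores)
        # floor at a small value so log() is defined for empty buckets
        return [max(c / n, 1e-4) for c in counts]

    e = bucket_fractions(expected)
    a = bucket_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

An identical distribution yields a PSI near zero; a shifted score distribution pushes the index past the retrain threshold.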
2.4 Model Tiering for Financial Services
| Model Tier | Criteria | Validation Frequency | Documentation |
|---|---|---|---|
| Tier 1 (Critical) | Credit decisions >$10M, Trading algorithms, Capital models | Annual + event-driven | Full MRM package |
| Tier 2 (High) | Credit scoring, Fraud detection, Pricing models | Annual | Full documentation |
| Tier 3 (Medium) | Marketing models, Forecasting, Operational models | 18-24 months | Standard documentation |
| Tier 4 (Low) | Internal tools, Non-decision models | 24-36 months | Simplified documentation |
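The tiering table can be encoded as a simple assignment rule. This is an illustrative sketch: the use-case labels and function signature are assumptions, and real tiering would draw on the inventory metadata registered under Doc 04.

```python
def assign_model_tier(use_case: str, credit_exposure_usd: float = 0.0) -> int:
    """Map a model to a validation tier per the tiering table above.
    Use-case labels are illustrative placeholders."""
    tier1 = {"trading_algorithm", "capital_model"}
    tier2 = {"credit_scoring", "fraud_detection", "pricing_model"}
    tier3 = {"marketing_model", "forecasting", "operational_model"}

    # Credit decisions over $10M are Tier 1; smaller ones fall to Tier 2
    if use_case == "credit_decision" and credit_exposure_usd > 10_000_000:
        return 1
    if use_case in tier1:
        return 1
    if use_case in tier2 or use_case == "credit_decision":
        return 2
    if use_case in tier3:
        return 3
    return 4  # internal tools / non-decision models
```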
3. Fair Lending & ECOA Compliance
3.1 Overview
The Equal Credit Opportunity Act (ECOA) and the Fair Housing Act prohibit discrimination in credit decisions. AI systems used in lending must be able to demonstrate non-discrimination under both disparate-treatment and disparate-impact theories.
3.2 Prohibited Bases for Credit Decisions
| Protected Class | Regulation | AI Testing Required |
|---|---|---|
| Race | ECOA, FHA | Disparate impact analysis |
| Color | ECOA, FHA | Disparate impact analysis |
| Religion | ECOA | Disparate impact analysis |
| National Origin | ECOA, FHA | Disparate impact analysis |
| Sex/Gender | ECOA, FHA | Disparate impact analysis |
| Marital Status | ECOA | Disparate impact analysis |
| Age | ECOA | Disparate impact analysis |
| Public Assistance | ECOA | Disparate impact analysis |
| Familial Status | FHA | Disparate impact analysis |
| Disability | FHA | Disparate impact analysis |
3.3 Fair Lending Testing Framework
| Test Type | Description | Threshold | Frequency |
|---|---|---|---|
| Disparate Treatment | Direct discrimination in model features | Zero tolerance | Pre-deployment + annual |
| Disparate Impact | Unintentional discriminatory outcomes | 80% rule (4/5ths) | Quarterly |
| Proxy Analysis | Variables correlated with protected classes | Remove or justify | Pre-deployment |
| Adverse Action | Explainability of denials | Clear reasons required | Every decision |
3.4 Disparate Impact Analysis Template
```
┌─────────────────────────────────────────────────────────────────┐
│ DISPARATE IMPACT ANALYSIS │
│ Credit Decision Model │
└─────────────────────────────────────────────────────────────────┘
```
Model: [Credit Scoring Model v2.1]
Analysis Period: [Q4 2025]
Analyst: [Name]
APPROVAL RATE BY DEMOGRAPHIC GROUP:
| Group | Applications | Approvals | Rate | Ratio vs. Highest |
|-----------------|-------------|-----------|--------|-------------------|
| White | 10,000 | 6,500 | 65.0% | 0.93 ✅ |
| Black | 3,000 | 1,620 | 54.0% | 0.77 ❌ |
| Hispanic | 2,500 | 1,500 | 60.0% | 0.86 ✅ |
| Asian | 2,000 | 1,400 | 70.0% | 1.00 (Reference) |
Threshold: ≥0.80 (80% of highest-rate group)
Status: ✅ PASS / ❌ FAIL / ⚠️ MONITOR
ADVERSE ACTION REASONS (Top 5):
1. Debt-to-income ratio exceeds threshold
2. Credit history length insufficient
3. Recent delinquencies
4. High credit utilization
5. Insufficient income documentation
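The ratio column in the template can be computed directly. A minimal sketch of the four-fifths calculation: per the stated threshold, each group's approval rate is compared against the highest-rate group, and any rounded ratio below 0.80 fails.

```python
def adverse_impact_ratios(outcomes):
    """Approval-rate ratio of each group vs. the highest-rate group
    (the four-fifths / 80% rule). `outcomes` maps group name to a
    (applications, approvals) pair."""
    rates = {g: approved / apps for g, (apps, approved) in outcomes.items()}
    top = max(rates.values())
    return {g: round(r / top, 2) for g, r in rates.items()}

def passes_four_fifths(outcomes, threshold=0.80):
    """True only if every group's ratio meets the 80% threshold."""
    return all(r >= threshold for r in adverse_impact_ratios(outcomes).values())
```

With the template's figures, Asian (70.0%) is the highest-rate group, so the Black approval rate of 54.0% yields a ratio of 0.77 and trips the threshold.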
3.5 Adverse Action Requirements
For AI-driven credit decisions:
| Requirement | Implementation |
|---|---|
| Specific reasons for denial | Explainable AI outputs mapping to reason codes |
| Principal reasons | Top factors influencing the decision (Reg B commentary: generally no more than four) |
| No prohibited factors | Audit trail proving protected classes not used |
| Consumer access | Right to receive explanation |
| Appeal process | Human review pathway |
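The mapping from explainable model outputs to adverse-action reason codes can be sketched as follows, assuming per-feature attribution scores (e.g. SHAP values) where positive values push toward denial. The feature names and reason texts are illustrative; a real system would draw codes from the lender's Reg B compliance library.

```python
# Illustrative feature-to-reason mapping; not an official reason-code list.
REASON_CODES = {
    "debt_to_income": "Debt-to-income ratio exceeds threshold",
    "credit_history_months": "Credit history length insufficient",
    "recent_delinquencies": "Recent delinquencies",
    "utilization": "High credit utilization",
}

def principal_reasons(attributions, max_reasons=4):
    """Return the principal adverse-action reasons from per-feature
    attribution scores. Only denial-driving (positive) features are
    reported, capped at four per Reg B commentary."""
    adverse = [(f, s) for f, s in attributions.items() if s > 0]
    adverse.sort(key=lambda fs: fs[1], reverse=True)
    return [REASON_CODES[f] for f, _ in adverse[:max_reasons]
            if f in REASON_CODES]
```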
4. Anti-Money Laundering (AML/BSA)
4.1 AI in AML Programs
| AML Function | AI Application | Regulatory Consideration |
|---|---|---|
| Transaction Monitoring | Anomaly detection, Pattern recognition | SAR filing accuracy |
| Customer Due Diligence | Risk scoring, Beneficial ownership | CDD rule compliance |
| Sanctions Screening | Name matching, Entity resolution | OFAC compliance |
| Case Management | Alert prioritization, Investigation assist | BSA program requirements |
4.2 AI Model Governance for AML
| Control | SR 11-7 Requirement | AI-Specific Implementation |
|---|---|---|
| Threshold Tuning | Documented rationale | ML-optimized thresholds with explainability |
| Alert Suppression | Risk-based approach | AI prioritization with human validation |
| False Positive Mgmt | Efficiency monitoring | Precision/recall tracking |
| Coverage Analysis | Risk coverage assessment | Model coverage validation |
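False-positive management via precision/recall tracking can be sketched from investigator dispositions. A minimal sketch: the (alert_fired, sar_worthy) tuple layout is an assumption for illustration, with `sar_worthy` reflecting the investigator's final disposition.

```python
def alert_quality(alerts):
    """Precision/recall metrics for AML alert dispositions.
    `alerts` is a list of (alert_fired, sar_worthy) boolean pairs."""
    tp = sum(1 for fired, worthy in alerts if fired and worthy)
    fp = sum(1 for fired, worthy in alerts if fired and not worthy)
    fn = sum(1 for fired, worthy in alerts if not fired and worthy)
    precision = tp / (tp + fp) if tp + fp else 0.0  # alert efficiency
    recall = tp / (tp + fn) if tp + fn else 0.0     # risk coverage
    fp_share = fp / (tp + fp) if tp + fp else 0.0   # false-positive burden
    return {"precision": precision, "recall": recall,
            "false_positive_share": fp_share}
```

Falling recall signals a coverage gap (missed suspicious activity); falling precision signals growing investigator workload from false positives.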
4.3 Suspicious Activity Reporting
| Consideration | AI Requirement |
|---|---|
| Human Decision | SAR filing decision must involve human judgment |
| Audit Trail | AI contribution to SAR decision documented |
| Timeliness | AI must not delay SAR filing deadlines |
| Explainability | AI rationale for alert generation available |
5. Algorithmic Trading & Securities
5.1 SEC Rule 15c3-5 (Market Access Rule)
| Requirement | AI Governance Control |
|---|---|
| Risk Management Controls | Pre-trade risk limits, position monitoring |
| Financial Limits | AI cannot exceed predefined financial thresholds |
| Regulatory Limits | Short sale restrictions, order marking |
| Erroneous Order Prevention | Input validation, circuit breakers |
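The controls in the table can be sketched as a sequential pre-trade gate in the spirit of Rule 15c3-5. The order fields, limit names, and the percentage price collar are illustrative assumptions, not the rule's prescribed parameters.

```python
def pretrade_check(order, limits, restricted_symbols):
    """Run sequential pre-trade risk checks on a single order.
    Returns (accepted, reasons); any failed check blocks routing."""
    reasons = []
    if order["symbol"] in restricted_symbols:
        reasons.append("restricted list")
    if order["qty"] > limits["max_order_qty"]:
        reasons.append("order size limit")
    if order["qty"] * order["price"] > limits["max_notional"]:
        reasons.append("capital limit")
    ref = limits["reference_price"].get(order["symbol"])
    if ref and abs(order["price"] - ref) / ref > limits["price_collar_pct"]:
        reasons.append("price collar")
    return len(reasons) == 0, reasons
```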
5.2 Trading Algorithm Controls
```
┌─────────────────────────────────────────────────────────────────┐
│ ALGORITHMIC TRADING AI CONTROLS │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ PRE-TRADE CONTROLS │
│ • Position limits • Capital limits │
│ • Order size limits • Restricted list check │
│ • Price collar checks • Market hours validation │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ REAL-TIME MONITORING │
│ • P&L monitoring • Exposure tracking │
│ • Message rate monitoring • Latency tracking │
│ • Kill switch activation • Manual override capability │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ POST-TRADE CONTROLS │
│ • Trade reconciliation • P&L attribution │
│ • Compliance review • Best execution analysis │
│ • Regulatory reporting • Audit trail │
└─────────────────────────────────────────────────────────────────┘
```
5.3 Kill Switch Requirements
| Trigger | Action | Recovery |
|---|---|---|
| P&L limit breach | Immediate halt, cancel open orders | Manual review, executive approval |
| Position limit breach | Halt new orders, reduce exposure | Risk review, threshold reset |
| System malfunction | Immediate halt, alert operations | Technical review, testing |
| Market disruption | Halt trading, monitor | Market conditions review |
| Regulatory halt | Immediate compliance | Regulatory clearance |
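The trigger table can be evaluated as a simple state check. A sketch under stated assumptions: the state and limit field names are illustrative, and a production kill switch would be enforced in the order gateway rather than in application code.

```python
def evaluate_kill_switch(state, limits):
    """Map current trading state onto the trigger table above. Returns
    the list of triggered halt actions; empty means trading continues."""
    actions = []
    if state["pnl"] <= -limits["max_loss"]:
        actions.append("halt_all_and_cancel_open_orders")
    if abs(state["position"]) > limits["max_position"]:
        actions.append("halt_new_orders_and_reduce_exposure")
    if state["system_fault"]:
        actions.append("halt_all_and_alert_operations")
    if state["regulatory_halt"]:
        actions.append("halt_immediately_for_compliance")
    return actions
```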
5.4 FINRA Supervision Requirements
| Requirement | AI Governance Control |
|---|---|
| Supervisory Review | Human oversight of AI trading decisions |
| Written Procedures | Documented AI trading governance |
| Books and Records | Complete audit trail of AI activity |
| Customer Suitability | AI recommendations validated for suitability |
6. Insurance AI Governance
6.1 Insurance Use Cases
| Use Case | Regulation | Risk Level |
|---|---|---|
| Underwriting | State insurance regs, EU AI Act | High |
| Claims Processing | Unfair claims practices acts | High |
| Pricing/Rating | State rate filing requirements | High |
| Fraud Detection | State anti-fraud laws | Medium |
| Customer Service | Consumer protection laws | Medium |
6.2 Unfair Discrimination Prevention
Insurance AI must avoid unfair discrimination:
| Prohibited Factor | Testing Required | Mitigation |
|---|---|---|
| Race | Impact analysis | Remove proxies, calibration |
| Religion | Impact analysis | Remove proxies |
| National Origin | Impact analysis | Remove proxies |
| Gender (some states/products) | Varies by state | State-specific compliance |
| Credit-Based Insurance Score | State-specific rules | Disclosure, consumer rights |
6.3 Rate Filing Requirements
| Requirement | AI Documentation |
|---|---|
| Actuarial Justification | Model performance metrics, feature importance |
| Non-Discrimination | Fair lending-style disparate impact analysis |
| Data Sources | Complete data lineage, AI-BOM |
| Model Methodology | Technical documentation, algorithm description |
| Consumer Disclosure | Plain-language explanation of AI use |
7. Digital Operational Resilience Act (DORA)
7.1 Overview
DORA (EU Regulation 2022/2554) applies to financial entities operating in the EU and establishes ICT risk management requirements, including for AI systems.
7.2 DORA Requirements for AI
| DORA Article | Requirement | AI Governance Control |
|---|---|---|
| Art. 5-16 | ICT Risk Management | AI system risk assessment |
| Art. 17-23 | ICT Incident Reporting | AI incident response procedures |
| Art. 24-27 | Digital Operational Resilience Testing | AI system testing, red teaming |
| Art. 28-44 | Third-Party Risk Management | AI vendor due diligence |
| Art. 45 | Information Sharing | AI threat intelligence |
7.3 AI-Specific DORA Controls
| Control Area | Requirement |
|---|---|
| Change Management | AI model updates follow DORA change protocols |
| Business Continuity | AI system failover and recovery procedures |
| Incident Classification | AI incidents classified per DORA taxonomy |
| Outsourcing | AI vendor contracts include DORA provisions |
| Testing | Annual AI resilience testing |
8. Consumer Protection (UDAP/UDAAP)
8.1 Unfair, Deceptive, Abusive Acts or Practices
AI systems must not engage in UDAP/UDAAP:
| Standard | AI Application | Control |
|---|---|---|
| Unfair | AI decision causes substantial injury | Outcome monitoring, impact assessment |
| Deceptive | AI provides misleading information | Output validation, disclosure |
| Abusive | AI takes unreasonable advantage | Vulnerability detection, human oversight |
8.2 AI Chatbot UDAP Considerations
| Risk | Mitigation |
|---|---|
| Inaccurate information | RAG with verified sources, confidence thresholds |
| Undisclosed AI interaction | Clear disclosure: "You are chatting with an AI" |
| Inability to reach human | Clear escalation path, "type HUMAN for agent" |
| Privacy violations | PII detection, data minimization |
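The disclosure and escalation mitigations can be combined into a single routing guard. A minimal sketch assuming the upstream model supplies a scalar confidence score; the 0.7 threshold and the HUMAN keyword convention are illustrative, not prescribed.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant."

def route_message(user_text: str, confidence: float):
    """Route one customer message: honor the HUMAN escalation keyword,
    escalate low-confidence answers, otherwise answer with the AI
    disclosure attached."""
    if user_text.strip().upper() == "HUMAN":
        return "escalate", "Connecting you to a human agent."
    if confidence < 0.7:
        return "escalate", ("I'm not confident I can answer that "
                            "accurately; routing you to a human agent.")
    return "answer", AI_DISCLOSURE
```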
9. Financial Services AI Documentation Requirements
9.1 Model Documentation Checklist
| Element | SR 11-7 | EU AI Act | Fair Lending |
|---|---|---|---|
| Business purpose | Required | Required | Required |
| Data sources | Required | Required | Required |
| Algorithm description | Required | Required | Required |
| Performance metrics | Required | Required | Required |
| Limitations | Required | Required | Required |
| Validation results | Required | Required | Required |
| Bias/fairness testing | Best practice | Required (high-risk) | Required |
| Explainability | Best practice | Required (high-risk) | Required |
| Monitoring plan | Required | Required | Required |
| Change management | Required | Required | Required |
9.2 Validation Documentation
| Validation Element | Description | Frequency |
|---|---|---|
| Conceptual Soundness | Theory, methodology review | Development + major changes |
| Data Quality | Input data assessment | Quarterly |
| Developmental Evidence | Back-testing, out-of-sample | Annual |
| Outcome Analysis | Predicted vs. actual | Monthly |
| Benchmarking | Challenger model comparison | Annual |
| Sensitivity Analysis | Parameter stability | Annual |
| Stress Testing | Extreme scenario performance | Annual |
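The monthly outcome analysis (predicted vs. actual) can be sketched as a calibration check for a probability-of-default model. The 25% relative tolerance is an illustrative flagging rule, not a regulatory threshold.

```python
def outcome_analysis(predicted_pd, defaulted):
    """Compare mean predicted default probability to the realized
    default rate and flag material miscalibration.
    `defaulted` holds 1 for defaulted loans, 0 otherwise."""
    mean_pred = sum(predicted_pd) / len(predicted_pd)
    actual = sum(defaulted) / len(defaulted)
    gap = abs(mean_pred - actual)
    # Flag when the gap exceeds 25% of the realized rate (illustrative)
    flagged = gap > 0.25 * max(actual, 1e-9)
    return {"mean_predicted": mean_pred, "actual_rate": actual,
            "flagged": flagged}
```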
10. Regulatory Examination Preparation
10.1 Common Examination Focus Areas
| Area | Examiner Questions | Documentation Ready |
|---|---|---|
| Model Inventory | "Show me all AI models in production" | Inventory with risk tiers |
| Validation | "Show me validation for [model]" | Validation reports, findings |
| Fair Lending | "Demonstrate non-discrimination" | Disparate impact analyses |
| Governance | "Who approved this model?" | Approval documentation |
| Monitoring | "How do you detect model degradation?" | Monitoring dashboards |
| Incidents | "What incidents have occurred?" | Incident log, remediation |
10.2 Examination Preparation Checklist
30 Days Before:
7 Days Before:
During Examination:
11. Document History
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2026-01-16 | AI Governance Office | Initial release |
CODITECT AI Risk Management Framework
Document ID: AI-RMF-25 | Version: 1.0.0 | Status: Active
AZ1.AI Inc. | CODITECT Platform
Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001
This document is part of the CODITECT AI Risk Management Framework.
For questions or updates, contact the AI Governance Office.
Repository: coditect-ai-risk-management-framework
Last Updated: 2026-01-16
Owner: AZ1.AI Inc. | Lead: Hal Casteel