Third-Party AI Risk Management Standard
Document Type: Enterprise Standard
Framework Alignment: NIST AI RMF 2.0, EU AI Act, ISO/IEC 42001 Annex A.5
Effective Date: 2026-01-15
Version: 1.0
1. Purpose and Scope
1.1 Purpose
This standard establishes requirements for managing risks associated with third-party AI components, services, and vendors. It ensures that external AI dependencies receive appropriate due diligence, monitoring, and governance throughout their lifecycle.
1.2 Scope
This standard applies to:
- Third-party AI APIs (OpenAI, Anthropic, Google, etc.)
- Embedded AI features in SaaS products
- Open-source AI models and frameworks
- AI consulting and development services
- Data labeling and annotation services
- AI infrastructure providers (GPU cloud, model serving)
1.3 Regulatory Drivers
| Regulation | Requirement | Reference |
|---|---|---|
| NIST AI RMF 2.0 | Third-party model assessment | MAP 1.5, MANAGE 2.3 |
| EU AI Act | Value chain responsibilities | Article 25 |
| EU AI Act | GPAI downstream obligations | Article 53(1)(b) |
| ISO/IEC 42001 | Third-party management | Annex A.5.4 |
2. Third-Party AI Classification
2.1 Classification Categories
| Category | Definition | Examples | Risk Level |
|---|---|---|---|
| API Services | AI capabilities accessed via API | OpenAI, Anthropic, Google AI | High |
| Embedded AI | AI features within SaaS products | Salesforce Einstein, ServiceNow | Medium-High |
| Open Source Models | Community-developed AI models | Llama, Mistral, Stable Diffusion | Medium |
| AI Infrastructure | Computing and serving platforms | AWS Bedrock, Azure AI | Medium |
| Data Services | Training data and labeling | Scale AI, Labelbox | Medium-High |
| Development Services | AI consulting and development | Custom development vendors | High |
2.2 Risk Tier Assignment
| Third-Party Risk Tier | Criteria | Due Diligence Level |
|---|---|---|
| Critical | Core business dependency; handles sensitive data; GPAI provider | Full assessment + continuous monitoring |
| High | Significant business function; some sensitive data | Enhanced assessment + periodic monitoring |
| Medium | Supporting function; limited data exposure | Standard assessment + annual review |
| Low | Minimal impact; no sensitive data | Simplified assessment |
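The tier criteria above can be sketched as a small decision helper. This is an illustrative mapping only; the function name and input fields (`core_dependency`, `sensitive_data`, `gpai_provider`) are assumptions, not part of the standard.

```python
# Sketch: map assessment answers to a third-party risk tier per section 2.2.
# Input field names are illustrative assumptions.
def assign_risk_tier(core_dependency: bool,
                     sensitive_data: str,   # "none" | "limited" | "some" | "full"
                     gpai_provider: bool) -> str:
    """Return Critical/High/Medium/Low following the section 2.2 criteria."""
    if core_dependency or gpai_provider or sensitive_data == "full":
        return "Critical"
    if sensitive_data == "some":
        return "High"
    if sensitive_data == "limited":
        return "Medium"
    return "Low"
```

In practice the tier drives the due-diligence level, so this helper would feed directly into workflow routing (full assessment vs. simplified).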
3. Due Diligence Requirements
3.1 Pre-Engagement Assessment
3.1.1 Vendor Assessment Questionnaire
Section A: Company Information
- Legal entity name and jurisdiction
- Company size and financial stability
- AI-specific certifications (ISO/IEC 42001, SOC 2)
- Insurance coverage (AI/ML specific)
Section B: AI Governance
- AI ethics policy and governance structure
- Responsible AI practices documentation
- Incident response procedures
- Model change management process
Section C: Technical Capabilities
- Model documentation availability
- API versioning and deprecation policy
- SLA and uptime commitments
- Scalability and performance guarantees
Section D: Security
- Security certifications (SOC 2, ISO 27001)
- Data encryption (at rest, in transit)
- Access control mechanisms
- Penetration testing frequency
Section E: Data Handling
- Data processing locations
- Data retention policies
- Training data usage (opt-out available?)
- Sub-processor disclosure
Section F: Compliance
- EU AI Act compliance status
- GDPR/privacy compliance
- Industry-specific certifications
- Audit rights provision
3.2 Assessment Scoring Matrix
| Category | Weight | Score (1-5) | Weighted Score |
|---|---|---|---|
| AI Governance | 20% | | |
| Security Posture | 25% | | |
| Data Handling | 20% | | |
| Regulatory Compliance | 20% | | |
| Technical Capability | 15% | | |
| Total | 100% | | |
Scoring Thresholds:
- ≥4.0: Approved for all tiers
- 3.0-3.9: Approved with conditions
- 2.0-2.9: Enhanced monitoring required
- <2.0: Not approved
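The weighted score and threshold mapping can be expressed directly. This is a minimal sketch of the section 3.2 arithmetic; the dictionary keys are illustrative names for the five assessment categories.

```python
# Sketch: compute the weighted vendor score (section 3.2) and map it to an
# approval outcome. Weights mirror the scoring matrix; each score is 1-5.
WEIGHTS = {
    "ai_governance": 0.20,
    "security_posture": 0.25,
    "data_handling": 0.20,
    "regulatory_compliance": 0.20,
    "technical_capability": 0.15,
}

def weighted_score(scores: dict[str, float]) -> float:
    return round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 2)

def approval_outcome(score: float) -> str:
    if score >= 4.0:
        return "Approved for all tiers"
    if score >= 3.0:
        return "Approved with conditions"
    if score >= 2.0:
        return "Enhanced monitoring required"
    return "Not approved"
```

For example, a vendor scoring 3 on governance and data handling, 4 on security and compliance, and 2 on technical capability lands at 3.3, i.e. approved with conditions.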
3.3 Documentation Requirements by Tier
| Document | Critical | High | Medium | Low |
|---|---|---|---|---|
| Vendor Assessment | ✓ Full | ✓ Full | ✓ Standard | ✓ Simplified |
| AI-BOM Entry | ✓ | ✓ | ✓ | ✓ |
| Security Review | ✓ | ✓ | Optional | - |
| Legal Review | ✓ | ✓ | Optional | - |
| Privacy Impact Assessment | ✓ | If PII | - | - |
| Model Documentation | ✓ | ✓ | Summary | - |
| Training Data Summary | ✓ (if GPAI) | ✓ (if GPAI) | - | - |
4. Contract Requirements
4.1 Mandatory Contract Clauses
All Third-Party AI Contracts Must Include:
| Clause | Purpose | Minimum Standard |
|---|---|---|
| AI Disclosure | Transparency | Vendor must disclose all AI components |
| Data Usage | Data protection | No use of our data for model training without explicit consent |
| IP Indemnification | Legal protection | Vendor indemnifies for IP infringement claims |
| Audit Rights | Assurance | Right to audit or receive audit reports |
| Incident Notification | Risk management | 24-48 hour notification for AI incidents |
| Subprocessor Notification | Supply chain | Notification of AI subprocessor changes |
| Documentation Access | Compliance | Access to model documentation on request |
| Exit Provisions | Business continuity | Data portability, transition assistance |
4.2 EU AI Act Specific Clauses
For GPAI Providers:
- Obligation to provide downstream documentation (Article 53(1)(b))
- Training data summary access
- Systemic risk notification commitment
- Compliance attestation
For High-Risk AI Components:
- Conformity assessment documentation
- Technical documentation access
- Human oversight support
- Traceability requirements
4.3 SLA Requirements
| Metric | Critical Tier | High Tier | Medium Tier |
|---|---|---|---|
| Uptime | 99.9% | 99.5% | 99.0% |
| Response Time (P95) | <200ms | <500ms | <1000ms |
| Incident Response | 1 hour | 4 hours | 24 hours |
| Model Update Notice | 30 days | 14 days | 7 days |
| Support Hours | 24/7 | Business hours + on-call | Business hours |
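The P95 latency targets above can be checked against observed samples. A minimal sketch, assuming latency samples are collected in milliseconds; the percentile method (nearest-rank) and function names are illustrative.

```python
# Sketch: evaluate latency samples against the tier P95 targets in
# section 4.3 (milliseconds). Uses the nearest-rank percentile method.
import math

SLA_P95_MS = {"Critical": 200, "High": 500, "Medium": 1000}

def p95(samples: list[float]) -> float:
    ordered = sorted(samples)
    idx = math.ceil(0.95 * len(ordered)) - 1
    return ordered[idx]

def meets_latency_sla(samples: list[float], tier: str) -> bool:
    return p95(samples) < SLA_P95_MS[tier]
```

A daily job applying this check can feed the SLA dashboard required in section 5.1.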
5. Ongoing Monitoring
5.1 Continuous Monitoring Requirements
| Monitoring Area | Frequency | Method | Owner |
|---|---|---|---|
| Service Availability | Real-time | API monitoring | Platform Team |
| Performance Metrics | Daily | SLA dashboard | Platform Team |
| Security Incidents | Continuous | Vendor notifications + news | Security Team |
| Model Changes | On notification | Change review | AI Governance |
| Compliance Updates | Quarterly | Vendor attestation | Compliance |
| Financial Stability | Annual | Credit check, news | Procurement |
5.2 Model Drift and Performance Monitoring
For AI API Services:
- Monitor output quality metrics
- Track response time trends
- Log and analyze error rates
- Compare outputs against baselines
Monitoring Dashboard Requirements:
- API call volumes and costs
- Error rates by endpoint
- Latency distribution
- Quality score trends
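The error-rate requirement above can be implemented as a rolling-window check. A minimal sketch; the window size and 5% threshold are assumptions to tune per endpoint.

```python
# Sketch: rolling error-rate monitor for an AI API endpoint, one input to
# the monitoring dashboard. Window and threshold are assumed defaults.
from collections import deque

class ErrorRateMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.results = deque(maxlen=window)  # True = success, False = error
        self.threshold = threshold

    def record(self, success: bool) -> None:
        self.results.append(success)

    def error_rate(self) -> float:
        if not self.results:
            return 0.0
        return 1 - sum(self.results) / len(self.results)

    def alert(self) -> bool:
        return self.error_rate() > self.threshold
```

The same pattern extends to quality scores or baseline-comparison metrics: record per-call results, compute a rolling statistic, and alert on threshold breach.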
5.3 Risk Event Triggers
| Event | Risk Level | Action Required |
|---|---|---|
| Vendor security breach | Critical | Immediate assessment, potential suspension |
| Model deprecation notice | High | Migration planning within 30 days |
| Terms of service change | Medium | Legal review within 14 days |
| Pricing change | Medium | Cost impact analysis |
| Acquisition/merger | High | Vendor reassessment |
| Regulatory enforcement action | Critical | Compliance review, potential exit |
6. Vendor Management Lifecycle
6.1 Lifecycle Stages
┌─────────────────────────────────────────────────────────────┐
│ 1. IDENTIFICATION │
│ • Business need definition │
│ • Market research │
│ • Shortlist candidates │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 2. ASSESSMENT │
│ • Vendor questionnaire │
│ • Security review │
│ • Legal review │
│ • Technical evaluation │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 3. CONTRACTING │
│ • Contract negotiation │
│ • Required clauses inclusion │
│ • SLA definition │
│ • Approval workflow │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 4. ONBOARDING │
│ • Technical integration │
│ • AI-BOM creation │
│ • Monitoring setup │
│ • Team training │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 5. ONGOING MANAGEMENT │
│ • Continuous monitoring │
│ • Periodic reassessment │
│ • Incident management │
│ • Relationship management │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 6. OFFBOARDING │
│ • Exit planning │
│ • Data retrieval/deletion │
│ • Alternative sourcing │
│ • Documentation archive │
└─────────────────────────────────────────────────────────────┘
6.2 Reassessment Schedule
| Tier | Full Reassessment | Security Review | Compliance Check |
|---|---|---|---|
| Critical | Annual | Quarterly | Quarterly |
| High | Annual | Semi-annual | Semi-annual |
| Medium | Biennial | Annual | Annual |
| Low | On renewal | On renewal | On renewal |
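The reassessment schedule above lends itself to automated due-date tracking. A minimal sketch of the full-reassessment column only; "On renewal" tiers and leap-day anniversaries are handled outside this helper, and function names are illustrative.

```python
# Sketch: compute full-reassessment due dates per the section 6.2 schedule.
# "Low" (on renewal) is intentionally omitted; Feb 29 anniversaries are not
# handled in this sketch.
from datetime import date

REASSESS_YEARS = {"Critical": 1, "High": 1, "Medium": 2}

def next_reassessment(last: date, tier: str) -> date:
    return last.replace(year=last.year + REASSESS_YEARS[tier])

def is_overdue(last: date, tier: str, today: date) -> bool:
    return today > next_reassessment(last, tier)
```

Running `is_overdue` across the vendor inventory gives the "overdue reassessments" metric in section 11.1 directly.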
7. Open Source AI Model Requirements
7.1 Open Source Assessment Criteria
| Criterion | Assessment | Minimum Standard |
|---|---|---|
| License Compliance | Legal review | Compatible with commercial use |
| Provenance | Documentation | Clear model lineage |
| Security | Vulnerability scan | No known critical CVEs |
| Community Support | Activity assessment | Active maintenance |
| Training Data | Transparency | Known data sources |
7.2 Open Source AI Checklist
Before Adoption:
- License reviewed and approved by Legal
- Model provenance documented
- Security scan completed (no critical vulnerabilities)
- Hash/checksum verified against source
- Training data sources understood
- AI-BOM entry created
- Maintenance responsibility assigned
Ongoing:
- Monitor for security advisories
- Track version updates
- Maintain local copies with verified hashes
- Periodic re-evaluation (annual minimum)
7.3 Model Download and Verification
# Example: Model verification procedure
# 1. Download model from official source
wget https://official-source/model-v1.0.safetensors
# 2. Verify hash against published checksum
sha256sum model-v1.0.safetensors
# Compare output with official checksum
# 3. Document in AI-BOM
# Model: model-v1.0
# Source: official-source
# Download Date: YYYY-MM-DD
# SHA-256: [verified hash]
8. Incident Response
8.1 Third-Party AI Incident Types
| Incident Type | Severity | Response Time |
|---|---|---|
| Vendor data breach involving our data | Critical | 1 hour |
| Model producing harmful outputs | High | 4 hours |
| Service outage (critical dependency) | High | 1 hour |
| Significant model behavior change | Medium | 24 hours |
| Pricing/terms change | Low | 7 days |
8.2 Incident Response Procedure
Immediate (0-4 hours):
- Assess impact on our systems
- Activate alternative/backup if available
- Notify internal stakeholders
- Document incident details
Short-term (4-48 hours):
- Obtain vendor incident report
- Assess ongoing risk
- Implement compensating controls
- Communicate with affected parties
Long-term (post-incident):
- Conduct root cause analysis
- Update vendor risk assessment
- Review contract adequacy
- Implement preventive measures
9. SMB Implementation Guide
9.1 Simplified Assessment
SMBs may use the following simplified vendor assessment approach, scaled by time investment:
Tier 1 Check (5 minutes):
- Vendor has privacy policy
- Vendor has security page/certifications
- Terms of service reviewed
- Pricing understood
Tier 2 Check (30 minutes):
- Security certifications verified (SOC 2, ISO 27001)
- Data processing agreement in place
- API documentation reviewed
- Support availability confirmed
Tier 3 Check (2-4 hours):
- Full vendor questionnaire
- Legal review of contract
- Technical integration assessment
- Reference checks
9.2 SMB Vendor Tracking
Simple spreadsheet approach:
| Vendor | Service | Tier | Contract End | Last Review | Owner |
|---|---|---|---|---|---|
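A spreadsheet exported to CSV can be checked for overdue reviews with a few lines of script. A minimal sketch; the one-year review cadence is an assumption, and the sample data is invented for illustration. Column names match the table above.

```python
# Sketch: flag vendors whose last review is more than a year old, from the
# simple tracking spreadsheet exported as CSV. Cadence (365 days) assumed.
import csv
import io
from datetime import date, timedelta

def overdue_vendors(csv_text: str, today: date) -> list[str]:
    cutoff = today - timedelta(days=365)
    overdue = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if date.fromisoformat(row["Last Review"]) < cutoff:
            overdue.append(row["Vendor"])
    return overdue

sample = (
    "Vendor,Service,Tier,Contract End,Last Review,Owner\n"
    "Acme AI,Chat API,2,2026-12-31,2024-06-01,J. Doe\n"
    "BetaML,Labeling,3,2026-06-30,2025-09-01,A. Lee\n"
)
```

This keeps the SMB approach lightweight: no GRC platform, just a recurring script over the tracking sheet.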
10. Enterprise Integration
10.1 GRC Integration
Third-party AI risk management should integrate with:
- Vendor Risk Management (VRM) system
- GRC platform (ServiceNow, Archer, OneTrust)
- Contract management system
- Asset management / CMDB
10.2 Process Integration
| Process | Third-Party AI Touchpoint |
|---|---|
| Procurement | Vendor assessment before PO |
| Security Review | AI-specific security questionnaire |
| Legal Review | AI contract clause checklist |
| Architecture Review | AI-BOM requirements |
| Vendor Review Board | AI risk tier consideration |
11. Metrics and Reporting
11.1 Key Metrics
| Metric | Target | Frequency |
|---|---|---|
| % vendors with current assessment | 100% | Monthly |
| Average vendor risk score | ≥3.5 | Quarterly |
| Overdue reassessments | 0 | Monthly |
| Incidents from third-party AI | Minimize | Monthly |
| Contract compliance rate | 100% | Quarterly |
11.2 Reporting Requirements
Monthly Report:
- New third-party AI onboarded
- Vendor risk score changes
- Incidents and resolutions
- Upcoming reassessments
Quarterly Report:
- Third-party AI inventory summary
- Risk distribution by tier
- Compliance status
- Cost trends
Document Control
Version History
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2025-06-15 | AI Governance Office | Initial release |
Approvals
| Role | Name | Date |
|---|---|---|
| AI Risk Officer | ||
| Procurement Lead | ||
| CISO | ||
| Legal Counsel |
Appendix A: Vendor Assessment Questionnaire Template
[Full questionnaire available as separate attachment]
Appendix B: Contract Clause Library
[Standard AI contract clauses available as separate attachment]
Appendix C: Approved Vendor List
[Internal list maintained separately with access controls]
CODITECT AI Risk Management Framework
Document ID: AI-RMF-15 | Version: 2.0.0 | Status: Active
AZ1.AI Inc. | CODITECT Platform
Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001
This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.
Repository: coditect-ai-risk-management-framework | Last Updated: 2026-01-15 | Owner: AZ1.AI Inc. | Lead: Hal Casteel