
Third-Party AI Risk Management Standard

Document Type: Enterprise Standard
Framework Alignment: NIST AI RMF 2.0, EU AI Act, ISO/IEC 42001 Annex A.5
Effective Date: 2026-01-15
Version: 1.0


1. Purpose and Scope

1.1 Purpose

This standard establishes requirements for managing risks associated with third-party AI components, services, and vendors. It ensures that external AI dependencies receive appropriate due diligence, monitoring, and governance throughout their lifecycle.

1.2 Scope

This standard applies to:

  • Third-party AI APIs (OpenAI, Anthropic, Google, etc.)
  • Embedded AI features in SaaS products
  • Open-source AI models and frameworks
  • AI consulting and development services
  • Data labeling and annotation services
  • AI infrastructure providers (GPU cloud, model serving)

1.3 Regulatory Drivers

| Regulation | Requirement | Reference |
|---|---|---|
| NIST AI RMF 2.0 | Third-party model assessment | MAP 1.5, MANAGE 2.3 |
| EU AI Act | Value chain responsibilities | Article 25 |
| EU AI Act | GPAI downstream obligations | Article 53(1)(b) |
| ISO/IEC 42001 | Third-party management | Annex A.5.4 |

2. Third-Party AI Classification

2.1 Classification Categories

| Category | Definition | Examples | Risk Level |
|---|---|---|---|
| API Services | AI capabilities accessed via API | OpenAI, Anthropic, Google AI | High |
| Embedded AI | AI features within SaaS products | Salesforce Einstein, ServiceNow | Medium-High |
| Open Source Models | Community-developed AI models | Llama, Mistral, Stable Diffusion | Medium |
| AI Infrastructure | Computing and serving platforms | AWS Bedrock, Azure AI | Medium |
| Data Services | Training data and labeling | Scale AI, Labelbox | Medium-High |
| Development Services | AI consulting and development | Custom development vendors | High |

2.2 Risk Tier Assignment

| Third-Party Risk Tier | Criteria | Due Diligence Level |
|---|---|---|
| Critical | Core business dependency; handles sensitive data; GPAI provider | Full assessment + continuous monitoring |
| High | Significant business function; some sensitive data | Enhanced assessment + periodic monitoring |
| Medium | Supporting function; limited data exposure | Standard assessment + annual review |
| Low | Minimal impact; no sensitive data | Simplified assessment |
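The tier criteria above can be sketched as a decision rule. A minimal illustration in Python; the attribute names and the precedence of the criteria (most severe wins) are assumptions for illustration, not part of this standard:

```python
def assign_risk_tier(core_dependency: bool, handles_sensitive_data: bool,
                     is_gpai_provider: bool, business_function: str) -> str:
    """Map vendor attributes to a Section 2.2 risk tier.

    business_function is one of: "core", "significant", "supporting", "minimal".
    The evaluation order (most severe criterion wins) is an illustrative choice.
    """
    if core_dependency or is_gpai_provider:
        return "Critical"
    if business_function == "significant" or handles_sensitive_data:
        return "High"
    if business_function == "supporting":
        return "Medium"
    return "Low"
```

In practice these attributes would come from the vendor intake record, and a reviewer would confirm or override the computed tier.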

3. Due Diligence Requirements

3.1 Pre-Engagement Assessment

3.1.1 Vendor Assessment Questionnaire

Section A: Company Information

  • Legal entity name and jurisdiction
  • Company size and financial stability
  • AI-specific certifications (ISO/IEC 42001, SOC 2)
  • Insurance coverage (AI/ML specific)

Section B: AI Governance

  • AI ethics policy and governance structure
  • Responsible AI practices documentation
  • Incident response procedures
  • Model change management process

Section C: Technical Capabilities

  • Model documentation availability
  • API versioning and deprecation policy
  • SLA and uptime commitments
  • Scalability and performance guarantees

Section D: Security

  • Security certifications (SOC 2, ISO 27001)
  • Data encryption (at rest, in transit)
  • Access control mechanisms
  • Penetration testing frequency

Section E: Data Handling

  • Data processing locations
  • Data retention policies
  • Training data usage (opt-out available?)
  • Sub-processor disclosure

Section F: Compliance

  • EU AI Act compliance status
  • GDPR/privacy compliance
  • Industry-specific certifications
  • Audit rights provision

3.2 Assessment Scoring Matrix

| Category | Weight | Score (1-5) | Weighted Score |
|---|---|---|---|
| AI Governance | 20% | | |
| Security Posture | 25% | | |
| Data Handling | 20% | | |
| Regulatory Compliance | 20% | | |
| Technical Capability | 15% | | |
| Total | 100% | | |

Scoring Thresholds:

  • ≥4.0: Approved for all tiers
  • 3.0-3.9: Approved with conditions
  • 2.0-2.9: Enhanced monitoring required
  • <2.0: Not approved
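The weights and thresholds above translate directly into a calculation. A short sketch; the function names are illustrative:

```python
# Category weights from the Section 3.2 scoring matrix.
WEIGHTS = {
    "AI Governance": 0.20,
    "Security Posture": 0.25,
    "Data Handling": 0.20,
    "Regulatory Compliance": 0.20,
    "Technical Capability": 0.15,
}

def weighted_score(scores: dict) -> float:
    """Compute the weighted vendor score from per-category scores (1-5)."""
    assert set(scores) == set(WEIGHTS), "score every category exactly once"
    return sum(WEIGHTS[cat] * score for cat, score in scores.items())

def approval_status(total: float) -> str:
    """Map the weighted total to the Section 3.2 approval thresholds."""
    if total >= 4.0:
        return "Approved for all tiers"
    if total >= 3.0:
        return "Approved with conditions"
    if total >= 2.0:
        return "Enhanced monitoring required"
    return "Not approved"
```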

3.3 Documentation Requirements by Tier

| Document | Critical | High | Medium | Low |
|---|---|---|---|---|
| Vendor Assessment | ✓ Full | ✓ Full | ✓ Standard | ✓ Simplified |
| AI-BOM Entry | ✓ | ✓ | ✓ | ✓ |
| Security Review | ✓ | ✓ | Optional | - |
| Legal Review | ✓ | ✓ | Optional | - |
| Privacy Impact Assessment | ✓ | If PII | - | - |
| Model Documentation | ✓ | ✓ | Summary | - |
| Training Data Summary | ✓ (if GPAI) | ✓ (if GPAI) | - | - |

4. Contract Requirements

4.1 Mandatory Contract Clauses

All Third-Party AI Contracts Must Include:

| Clause | Purpose | Minimum Standard |
|---|---|---|
| AI Disclosure | Transparency | Vendor must disclose all AI components |
| Data Usage | Data protection | No use of our data for model training without explicit consent |
| IP Indemnification | Legal protection | Vendor indemnifies for IP infringement claims |
| Audit Rights | Assurance | Right to audit or receive audit reports |
| Incident Notification | Risk management | 24-48 hour notification for AI incidents |
| Subprocessor Notification | Supply chain | Notification of AI subprocessor changes |
| Documentation Access | Compliance | Access to model documentation on request |
| Exit Provisions | Business continuity | Data portability, transition assistance |
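A procurement review can check the mandatory clauses mechanically before approval. A minimal sketch; the clause identifiers are illustrative slugs for the table above:

```python
# Mandatory clause identifiers derived from the Section 4.1 table
# (slug names are illustrative, not a defined schema).
MANDATORY_CLAUSES = {
    "ai_disclosure", "data_usage", "ip_indemnification", "audit_rights",
    "incident_notification", "subprocessor_notification",
    "documentation_access", "exit_provisions",
}

def missing_clauses(contract_clauses: set) -> set:
    """Return the mandatory clauses absent from a contract record."""
    return MANDATORY_CLAUSES - set(contract_clauses)
```

A non-empty result would block contract approval until the gaps are negotiated.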

4.2 EU AI Act Specific Clauses

For GPAI Providers:

  • Obligation to provide downstream documentation (Article 53(1)(b))
  • Training data summary access
  • Systemic risk notification commitment
  • Compliance attestation

For High-Risk AI Components:

  • Conformity assessment documentation
  • Technical documentation access
  • Human oversight support
  • Traceability requirements

4.3 SLA Requirements

| Metric | Critical Tier | High Tier | Medium Tier |
|---|---|---|---|
| Uptime | 99.9% | 99.5% | 99.0% |
| Response Time (P95) | <200ms | <500ms | <1000ms |
| Incident Response | 1 hour | 4 hours | 24 hours |
| Model Update Notice | 30 days | 14 days | 7 days |
| Support Hours | 24/7 | Business hours + on-call | Business hours |

5. Ongoing Monitoring

5.1 Continuous Monitoring Requirements

| Monitoring Area | Frequency | Method | Owner |
|---|---|---|---|
| Service Availability | Real-time | API monitoring | Platform Team |
| Performance Metrics | Daily | SLA dashboard | Platform Team |
| Security Incidents | Continuous | Vendor notifications + news | Security Team |
| Model Changes | On notification | Change review | AI Governance |
| Compliance Updates | Quarterly | Vendor attestation | Compliance |
| Financial Stability | Annual | Credit check, news | Procurement |

5.2 Model Drift and Performance Monitoring

For AI API Services:

  • Monitor output quality metrics
  • Track response time trends
  • Log and analyze error rates
  • Compare outputs against baselines

Monitoring Dashboard Requirements:

  • API call volumes and costs
  • Error rates by endpoint
  • Latency distribution
  • Quality score trends
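The baseline comparison above can be as simple as a rolling-mean check. A minimal sketch; the 0.05 absolute-drop threshold is an illustrative default, not a value mandated by this standard:

```python
def drift_alert(baseline_scores: list, recent_scores: list,
                max_drop: float = 0.05) -> bool:
    """Flag drift when the mean quality score falls more than
    max_drop (absolute) below the baseline mean."""
    baseline_mean = sum(baseline_scores) / len(baseline_scores)
    recent_mean = sum(recent_scores) / len(recent_scores)
    return (baseline_mean - recent_mean) > max_drop
```

Production monitoring would typically use per-endpoint windows and statistical tests rather than a single fixed threshold.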

5.3 Risk Event Triggers

| Event | Risk Level | Action Required |
|---|---|---|
| Vendor security breach | Critical | Immediate assessment, potential suspension |
| Model deprecation notice | High | Migration planning within 30 days |
| Terms of service change | Medium | Legal review within 14 days |
| Pricing change | Medium | Cost impact analysis |
| Acquisition/merger | High | Vendor reassessment |
| Regulatory enforcement action | Critical | Compliance review, potential exit |

6. Vendor Management Lifecycle

6.1 Lifecycle Stages

┌─────────────────────────────────────────────────────────────┐
│ 1. IDENTIFICATION │
│ • Business need definition │
│ • Market research │
│ • Shortlist candidates │
└─────────────────────────────────────────────────────────────┘


┌─────────────────────────────────────────────────────────────┐
│ 2. ASSESSMENT │
│ • Vendor questionnaire │
│ • Security review │
│ • Legal review │
│ • Technical evaluation │
└─────────────────────────────────────────────────────────────┘


┌─────────────────────────────────────────────────────────────┐
│ 3. CONTRACTING │
│ • Contract negotiation │
│ • Required clauses inclusion │
│ • SLA definition │
│ • Approval workflow │
└─────────────────────────────────────────────────────────────┘


┌─────────────────────────────────────────────────────────────┐
│ 4. ONBOARDING │
│ • Technical integration │
│ • AI-BOM creation │
│ • Monitoring setup │
│ • Team training │
└─────────────────────────────────────────────────────────────┘


┌─────────────────────────────────────────────────────────────┐
│ 5. ONGOING MANAGEMENT │
│ • Continuous monitoring │
│ • Periodic reassessment │
│ • Incident management │
│ • Relationship management │
└─────────────────────────────────────────────────────────────┘


┌─────────────────────────────────────────────────────────────┐
│ 6. OFFBOARDING │
│ • Exit planning │
│ • Data retrieval/deletion │
│ • Alternative sourcing │
│ • Documentation archive │
└─────────────────────────────────────────────────────────────┘

6.2 Reassessment Schedule

| Tier | Full Reassessment | Security Review | Compliance Check |
|---|---|---|---|
| Critical | Annual | Quarterly | Quarterly |
| High | Annual | Semi-annual | Semi-annual |
| Medium | Biennial | Annual | Annual |
| Low | On renewal | On renewal | On renewal |
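The schedule above lends itself to simple due-date tracking. A rough sketch; the 30-day month approximation and the field names are assumptions, and the Low tier ("on renewal") has no fixed cadence so it is omitted:

```python
from datetime import date, timedelta

# Months between full reassessments, per the Section 6.2 schedule.
REASSESSMENT_MONTHS = {"Critical": 12, "High": 12, "Medium": 24}

def next_reassessment(tier: str, last_assessed: date) -> date:
    """Approximate the next full-reassessment due date for a tier.

    Uses a 30-day month approximation; a real scheduler would use
    calendar-aware date arithmetic.
    """
    months = REASSESSMENT_MONTHS[tier]
    return last_assessed + timedelta(days=months * 30)
```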

7. Open Source AI Model Requirements

7.1 Open Source Assessment Criteria

| Criterion | Assessment | Minimum Standard |
|---|---|---|
| License Compliance | Legal review | Compatible with commercial use |
| Provenance | Documentation | Clear model lineage |
| Security | Vulnerability scan | No known critical CVEs |
| Community Support | Activity assessment | Active maintenance |
| Training Data | Transparency | Known data sources |

7.2 Open Source AI Checklist

Before Adoption:

  • License reviewed and approved by Legal
  • Model provenance documented
  • Security scan completed (no critical vulnerabilities)
  • Hash/checksum verified against source
  • Training data sources understood
  • AI-BOM entry created
  • Maintenance responsibility assigned

Ongoing:

  • Monitor for security advisories
  • Track version updates
  • Maintain local copies with verified hashes
  • Periodic re-evaluation (annual minimum)

7.3 Model Download and Verification

```bash
# Example: model verification procedure

# 1. Download the model from the official source
wget https://official-source/model-v1.0.safetensors

# 2. Verify the hash against the published checksum
sha256sum model-v1.0.safetensors
# Compare the output with the official checksum

# 3. Record the result in the AI-BOM
# Model: model-v1.0
# Source: official-source
# Download Date: YYYY-MM-DD
# SHA-256: [verified hash]
```
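The same procedure can be scripted end to end. A minimal Python sketch using only the standard library; the AI-BOM record schema shown is illustrative, not a defined format:

```python
import hashlib
from datetime import date

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large model weights never load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def bom_entry(path: str, expected_sha256: str, source: str) -> dict:
    """Verify the download against the published checksum and emit an
    AI-BOM record (field names are illustrative)."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise ValueError(f"checksum mismatch for {path}")
    return {"model": path, "source": source,
            "download_date": date.today().isoformat(), "sha256": actual}
```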

8. Incident Response

8.1 Third-Party AI Incident Types

| Incident Type | Severity | Response Time |
|---|---|---|
| Vendor data breach involving our data | Critical | 1 hour |
| Model producing harmful outputs | High | 4 hours |
| Service outage (critical dependency) | High | 1 hour |
| Significant model behavior change | Medium | 24 hours |
| Pricing/terms change | Low | 7 days |

8.2 Incident Response Procedure

Immediate (0-4 hours):

  1. Assess impact on our systems
  2. Activate alternative/backup if available
  3. Notify internal stakeholders
  4. Document incident details

Short-term (4-48 hours):

  1. Obtain vendor incident report
  2. Assess ongoing risk
  3. Implement compensating controls
  4. Communicate with affected parties

Long-term (post-incident):

  1. Conduct root cause analysis
  2. Update vendor risk assessment
  3. Review contract adequacy
  4. Implement preventive measures

9. SMB Implementation Guide

9.1 Simplified Assessment

SMBs may use the following simplified, time-boxed vendor assessment approach:

Tier 1 Check (5 minutes):

  • Vendor has privacy policy
  • Vendor has security page/certifications
  • Terms of service reviewed
  • Pricing understood

Tier 2 Check (30 minutes):

  • Security certifications verified (SOC 2, ISO 27001)
  • Data processing agreement in place
  • API documentation reviewed
  • Support availability confirmed

Tier 3 Check (2-4 hours):

  • Full vendor questionnaire
  • Legal review of contract
  • Technical integration assessment
  • Reference checks

9.2 SMB Vendor Tracking

Simple spreadsheet approach:

| Vendor | Service | Tier | Contract End | Last Review | Owner |
|---|---|---|---|---|---|
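The spreadsheet columns above map onto a couple of simple alert rules. A sketch; the 365-day review window and 60-day renewal window are illustrative defaults, not requirements of this standard:

```python
from datetime import date, timedelta

# Rows mirror the spreadsheet columns above; dates are ISO strings.
vendors = [
    {"vendor": "ExampleAI", "service": "text API", "tier": 2,
     "contract_end": "2026-03-01", "last_review": "2025-01-10", "owner": "ops"},
]

def needs_attention(row: dict, today: date, review_days: int = 365,
                    renewal_days: int = 60) -> list:
    """Flag rows whose review is stale or whose contract ends soon."""
    flags = []
    if today - date.fromisoformat(row["last_review"]) > timedelta(days=review_days):
        flags.append("review overdue")
    if date.fromisoformat(row["contract_end"]) - today <= timedelta(days=renewal_days):
        flags.append("renewal approaching")
    return flags
```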

10. Enterprise Integration

10.1 GRC Integration

Third-party AI risk management should integrate with:

  • Vendor Risk Management (VRM) system
  • GRC platform (ServiceNow, Archer, OneTrust)
  • Contract management system
  • Asset management / CMDB

10.2 Process Integration

| Process | Third-Party AI Touchpoint |
|---|---|
| Procurement | Vendor assessment before PO |
| Security Review | AI-specific security questionnaire |
| Legal Review | AI contract clause checklist |
| Architecture Review | AI-BOM requirements |
| Vendor Review Board | AI risk tier consideration |

11. Metrics and Reporting

11.1 Key Metrics

| Metric | Target | Frequency |
|---|---|---|
| % vendors with current assessment | 100% | Monthly |
| Average vendor risk score | ≥3.5 | Quarterly |
| Overdue reassessments | 0 | Monthly |
| Incidents from third-party AI | Minimize | Monthly |
| Contract compliance rate | 100% | Quarterly |
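The assessment-currency and overdue-reassessment metrics can be computed directly from assessment records. A minimal sketch; the field names are illustrative:

```python
from datetime import date

def assessment_coverage(records: list, today: date) -> tuple:
    """Return (% of vendors with a current assessment, count overdue).

    A record counts as current when its next_due date has not passed.
    """
    if not records:
        return 100.0, 0
    current = sum(1 for r in records
                  if date.fromisoformat(r["next_due"]) >= today)
    overdue = len(records) - current
    return 100.0 * current / len(records), overdue
```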

11.2 Reporting Requirements

Monthly Report:

  • New third-party AI onboarded
  • Vendor risk score changes
  • Incidents and resolutions
  • Upcoming reassessments

Quarterly Report:

  • Third-party AI inventory summary
  • Risk distribution by tier
  • Compliance status
  • Cost trends

Document Control

Version History

| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2025-06-15 | AI Governance Office | Initial release |

Approvals

| Role | Name | Date |
|---|---|---|
| AI Risk Officer | | |
| Procurement Lead | | |
| CISO | | |
| Legal Counsel | | |

Appendix A: Vendor Assessment Questionnaire Template

[Full questionnaire available as separate attachment]

Appendix B: Contract Clause Library

[Standard AI contract clauses available as separate attachment]

Appendix C: Approved Vendor List

[Internal list maintained separately with access controls]


CODITECT AI Risk Management Framework

Document ID: AI-RMF-15 | Version: 2.0.0 | Status: Active


AZ1.AI Inc. | CODITECT Platform



This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework | Last Updated: 2026-01-15 | Owner: AZ1.AI Inc. | Lead: Hal Casteel