AI Vendor Contract Clause Library

Standard Contractual Provisions for AI Procurement


Document Control

Field            | Details
Document Type    | Contract Template Library
Applies To       | All AI vendor contracts
Owner            | Legal / Procurement
Version          | 1.0
Review Frequency | Annual or upon regulatory change

1. Purpose

This library provides standard contract clauses for AI-related vendor agreements. These clauses ensure appropriate risk allocation, regulatory compliance, and governance controls when procuring AI products and services.


2. Mandatory Clauses (All AI Contracts)

2.1 AI Disclosure and Transparency

ARTIFICIAL INTELLIGENCE DISCLOSURE

Vendor shall disclose to Customer:

(a) All AI, machine learning, or algorithmic components embedded in the
Services or Products ("AI Components");

(b) The general nature and purpose of each AI Component;

(c) Any material changes to AI Components that may affect functionality,
performance, or outputs;

(d) Known limitations and potential failure modes of AI Components;

(e) Whether AI Components are developed by Vendor or sourced from
third parties.

Vendor shall provide such disclosure in writing within thirty (30) days
of contract execution and shall update such disclosure within fifteen
(15) days of any material change.

2.2 Data Usage Restrictions

DATA USAGE RESTRICTIONS

(a) Vendor shall not use Customer Data to train, fine-tune, improve, or
develop any AI or machine learning models without Customer's prior
written consent.

(b) If Customer provides consent under (a), such consent shall be:
(i) Specific to identified use cases;
(ii) Revocable upon thirty (30) days written notice;
(iii) Subject to data anonymization requirements specified by Customer.

(c) Vendor shall implement technical controls to enforce data usage
restrictions and shall demonstrate such controls upon Customer request.

(d) Vendor shall not commingle Customer Data with data from other
customers for AI training purposes unless Customer Data is fully
anonymized and aggregated.

2.3 AI Incident Notification

AI INCIDENT NOTIFICATION

(a) Vendor shall notify Customer within twenty-four (24) hours of
discovering any AI Incident. "AI Incident" means:
(i) Unintended or harmful AI outputs affecting Customer or its users;
(ii) Security breach involving AI Components;
(iii) Significant degradation in AI performance or accuracy;
(iv) Bias or discrimination identified in AI outputs;
(v) Regulatory inquiry or enforcement action related to AI Components.

(b) Such notification shall include:
(i) Description of the incident;
(ii) Known or suspected cause;
(iii) Affected scope (users, data, systems);
(iv) Remediation steps taken or planned;
(v) Timeline for resolution.

(c) Vendor shall cooperate with Customer's incident response procedures
and provide reasonable assistance in investigating and remediating
AI Incidents at no additional cost.

2.4 Audit Rights

AI AUDIT RIGHTS

(a) Customer, or its designated third-party auditor, shall have the right
to audit Vendor's AI Components, development practices, and data
handling procedures upon reasonable notice.

(b) Audit rights include:
(i) Review of AI model documentation, including training data
sources and evaluation results;
(ii) Assessment of AI security controls;
(iii) Verification of data usage compliance;
(iv) Testing of AI outputs for bias and accuracy.

(c) Vendor shall provide reasonable access to personnel, systems, and
documentation to facilitate audits.

(d) If direct audit is not feasible due to security or confidentiality
concerns, Vendor shall provide:
(i) Third-party audit reports (e.g., SOC 2, ISO 27001, ISO/IEC 42001);
(ii) Completed security questionnaires;
(iii) AI governance documentation.

(e) Audits shall be conducted no more than [annually/semi-annually]
unless a material incident or compliance concern arises.

2.5 Subprocessor Notification

AI SUBPROCESSOR NOTIFICATION

(a) Vendor shall maintain a current list of subprocessors that process
Customer Data or provide AI Components, including:
(i) Foundation model providers (e.g., OpenAI, Anthropic, Google);
(ii) Cloud infrastructure providers;
(iii) Data processing or labeling services;
(iv) Any third party with access to Customer Data.

(b) Vendor shall provide Customer with at least thirty (30) days prior
written notice before:
(i) Adding a new AI-related subprocessor;
(ii) Changing foundation model providers;
(iii) Materially changing AI processing locations.

(c) Customer may object to proposed subprocessor changes within fifteen
(15) days. Parties shall negotiate in good faith to address concerns.
If concerns cannot be resolved, Customer may terminate affected
Services without penalty.

3. GenAI-Specific Clauses

3.1 Intellectual Property Indemnification (GenAI)

GENAI INTELLECTUAL PROPERTY INDEMNIFICATION

(a) Vendor shall defend, indemnify, and hold harmless Customer from any
claim, suit, or proceeding alleging that outputs generated by
Vendor's generative AI Services ("AI Outputs") infringe,
misappropriate, or violate any third party's intellectual property
rights, including:
(i) Copyright infringement claims;
(ii) Trademark infringement claims;
(iii) Trade secret misappropriation claims;
(iv) Patent infringement claims.

(b) This indemnification shall apply to AI Outputs generated during
normal use of the Services in accordance with documentation.

(c) Vendor's indemnification obligations are contingent upon:
(i) Customer providing prompt notice of any claim;
(ii) Customer granting Vendor sole control of defense and settlement;
(iii) Customer providing reasonable cooperation.

(d) Vendor shall have no obligation under this section for claims arising
from:
(i) Customer's modification of AI Outputs;
(ii) Customer's combination of AI Outputs with other materials;
(iii) Customer's use of AI Outputs after receiving notice of
potential infringement;
(iv) Customer's specific instructions that caused the infringement.

(e) The aggregate liability under this indemnification shall not be
subject to any limitation of liability cap in this Agreement.

3.2 Output Quality and Accuracy

GENAI OUTPUT QUALITY

(a) Customer acknowledges that generative AI outputs may contain errors,
inaccuracies, or "hallucinations" (confidently stated false
information).

(b) Vendor represents that it employs commercially reasonable measures
to minimize inaccurate outputs, including:
(i) Model evaluation and testing;
(ii) Safety filters and guardrails;
(iii) Regular model updates and improvements.

(c) Vendor shall provide documentation describing:
(i) Known limitations of AI Components;
(ii) Use cases where outputs should be verified;
(iii) Recommended human oversight practices.

(d) Customer is responsible for implementing appropriate human review
processes before relying on AI Outputs for material decisions.

(e) Vendor shall not be liable for damages arising solely from Customer's
reliance on AI Outputs without appropriate verification, except where
such reliance was specifically recommended by Vendor documentation.

3.3 Content Moderation and Safety

CONTENT MODERATION AND SAFETY

(a) Vendor shall implement and maintain content moderation controls to
prevent generation of:
(i) Illegal content;
(ii) Content promoting violence or self-harm;
(iii) Sexually explicit content involving minors;
(iv) Content violating applicable laws or regulations.

(b) Vendor shall provide Customer with configuration options to:
(i) Adjust content filtering thresholds;
(ii) Block specific content categories;
(iii) Implement custom content policies.

(c) Vendor shall maintain logging of content moderation actions and
provide reports upon Customer request.

(d) Vendor shall promptly address content moderation failures upon
Customer notification.

4. EU AI Act Compliance Clauses

4.1 GPAI Provider Obligations

EU AI ACT GPAI COMPLIANCE (For General-Purpose AI Providers)

Vendor represents and warrants that, to the extent applicable, it
complies with EU AI Act requirements for General-Purpose AI providers,
including:

(a) TECHNICAL DOCUMENTATION: Vendor maintains and shall provide upon
request technical documentation including:
(i) Model architecture and capabilities;
(ii) Training data summary;
(iii) Evaluation methodology and results;
(iv) Known limitations and risks.

(b) DOWNSTREAM PROVIDER INFORMATION: Vendor shall provide Customer
sufficient information to fulfill Customer's obligations as a
deployer of AI systems, including:
(i) Integration guidelines;
(ii) Capability descriptions;
(iii) Usage restrictions.

(c) COPYRIGHT COMPLIANCE: Vendor shall maintain a policy to comply with
Union copyright law, including:
(i) Respect for machine-readable rights reservations (e.g., robots.txt);
(ii) Opt-out mechanisms for rights holders;
(iii) Documentation of copyright compliance measures.

(d) TRAINING DATA TRANSPARENCY: Vendor shall publish or provide upon
request a summary of training data content.

(e) SYSTEMIC RISK (if applicable): For GPAI models with systemic risk,
Vendor shall:
(i) Conduct and share model evaluations;
(ii) Assess and mitigate systemic risks;
(iii) Implement adequate cybersecurity measures;
(iv) Report serious incidents to the AI Office.

4.2 High-Risk AI Compliance Support

HIGH-RISK AI COMPLIANCE SUPPORT

If Customer uses Vendor's AI Components in a high-risk AI system under
the EU AI Act, Vendor shall provide reasonable cooperation to support
Customer's compliance obligations, including:

(a) RISK MANAGEMENT: Information about Vendor's risk management practices
for AI Components;

(b) DATA GOVERNANCE: Documentation of data quality and governance
measures for training data;

(c) TECHNICAL DOCUMENTATION: Access to technical documentation necessary
for Customer's conformity assessment;

(d) TRANSPARENCY: Information necessary for Customer to provide adequate
transparency to end users;

(e) HUMAN OVERSIGHT: Support for implementing human oversight measures;

(f) ACCURACY AND ROBUSTNESS: Information about accuracy metrics and
robustness testing.

Vendor shall provide such cooperation at no additional charge for
standard documentation requests. Custom compliance support may be
subject to additional fees as mutually agreed.

4.3 Prohibited AI Uses

EU AI ACT PROHIBITED USES

(a) Customer shall not use Vendor's Services for any purpose prohibited
under EU AI Act Article 5, including:
(i) Social scoring systems;
(ii) Subliminal manipulation causing harm;
(iii) Exploitation of vulnerable groups;
(iv) Real-time biometric identification in public spaces
(except authorized law enforcement);
(v) Emotion recognition in workplace or educational settings
for certain purposes;
(vi) Biometric categorization inferring sensitive characteristics;
(vii) Untargeted scraping for facial recognition databases;
(viii) Risk assessments based solely on profiling.

(b) Vendor may terminate Services immediately if it becomes aware of
Customer's use for prohibited purposes.

(c) Customer shall indemnify Vendor for any claims, fines, or penalties
arising from Customer's use of Services for prohibited purposes.

5. Data Protection Clauses

5.1 AI-Specific Data Processing

AI-SPECIFIC DATA PROCESSING

In addition to standard data processing terms, the following applies
to AI processing:

(a) PROCESSING PURPOSES: Customer Data may be processed by AI Components
only for:
(i) Providing the contracted Services;
(ii) Service improvement with Customer consent;
(iii) Security and abuse prevention;
(iv) Legal compliance.

(b) DATA RETENTION:
(i) Input data (prompts): Retained for [X days] for service
delivery and abuse monitoring;
(ii) Output data: Retained for [X days] unless Customer requests
longer retention;
(iii) Training data: Customer Data not retained for training unless
Customer has explicitly consented.

(c) DATA LOCATION: AI processing shall occur only in:
[Specify approved regions/countries]

(d) DELETION: Upon contract termination or Customer request, Vendor
shall delete Customer Data from AI systems within [30 days],
except as required for legal compliance or backup retention.
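As a sanity check on the bracketed retention windows in (b) and the deletion deadline in (d), a deployer might script the date arithmetic during contract operationalization. The sketch below is illustrative only: the 30-day windows stand in for the bracketed [X days] terms left to negotiation, and all function and variable names are hypothetical.

```python
from datetime import datetime, timedelta

# Assumed windows standing in for the bracketed [X days] terms above.
RETENTION = {
    "prompt": timedelta(days=30),   # clause 5.1(b)(i), input data
    "output": timedelta(days=30),   # clause 5.1(b)(ii), output data
}
DELETION_DEADLINE = timedelta(days=30)  # clause 5.1(d)

def is_overdue(category: str, created_at: datetime, now: datetime) -> bool:
    """True if a record has outlived its contractual retention window."""
    return now - created_at > RETENTION[category]

def deletion_due_by(termination_date: datetime) -> datetime:
    """Latest date by which Customer Data must be purged after termination."""
    return termination_date + DELETION_DEADLINE

now = datetime(2026, 3, 1)
print(is_overdue("prompt", datetime(2026, 1, 1), now))  # 59 days old -> True
print(deletion_due_by(datetime(2026, 2, 1)).date())     # 2026-03-03
```

A real deployment would drive checks like these from the executed contract's actual figures, not from assumed defaults.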

5.2 AI Model Outputs and Personal Data

AI MODEL OUTPUTS AND PERSONAL DATA

(a) Customer acknowledges that AI outputs may inadvertently contain
personal data, including:
(i) Data present in Customer's input;
(ii) Data inferred or generated by the AI model.

(b) Vendor shall implement measures to minimize unintended personal data
in outputs, including:
(i) PII detection and filtering;
(ii) Output sanitization where feasible.

(c) Customer is responsible for:
(i) Reviewing outputs before use with personal data subjects;
(ii) Implementing appropriate privacy notices;
(iii) Honoring data subject rights for AI-generated content.

(d) Vendor shall assist Customer in responding to data subject requests
related to AI processing upon reasonable notice.

6. Service Level Clauses

6.1 AI-Specific SLAs

AI SERVICE LEVELS

Vendor shall meet the following service levels for AI Components:

(a) AVAILABILITY:
Service Tier: [Standard/Premium/Enterprise]
Monthly Uptime: [99.0%/99.5%/99.9%]
Measurement: Percentage of minutes AI endpoints are responsive

(b) LATENCY:
Metric: API response time
P50 Target: [X] milliseconds
P95 Target: [X] milliseconds
P99 Target: [X] milliseconds

(c) ERROR RATE:
Target: [< X%] of requests resulting in errors
Measurement: Monthly average

(d) MODEL PERFORMANCE:
Baseline metrics established at contract start
Vendor shall notify Customer if performance degrades by [>X%]

(e) SERVICE CREDITS:
Uptime < Target: [X%] credit for each [X%] below target
Maximum Credit: [X%] of monthly fees

(f) EXCLUSIONS:
(i) Scheduled maintenance (with [X days] notice);
(ii) Customer-caused issues;
(iii) Force majeure events;
(iv) Third-party service failures beyond Vendor's control.
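The availability and service-credit terms in (a) and (e) are easier to negotiate and verify when the computation is written out. The sketch below is illustrative only: the 99.5% target, 5%-per-0.5%-shortfall credit, and 30% cap are assumed placeholder values for the bracketed terms above, and all names are hypothetical.

```python
import math

def monthly_uptime_pct(responsive_minutes: int, total_minutes: int) -> float:
    """Percentage of minutes AI endpoints were responsive (clause 6.1(a))."""
    return 100.0 * responsive_minutes / total_minutes

def service_credit_pct(uptime_pct: float, target: float = 99.5,
                       credit_per_step: float = 5.0, step: float = 0.5,
                       cap: float = 30.0) -> float:
    """Service credit as a percentage of monthly fees (clause 6.1(e)).

    Assumed terms: `credit_per_step` percent credit for each `step`
    percent (or part thereof) below `target`, capped at `cap`.
    """
    if uptime_pct >= target:
        return 0.0
    steps = math.ceil((target - uptime_pct) / step)
    return min(steps * credit_per_step, cap)

# A 31-day month has 44,640 minutes; 446 minutes of downtime is ~99.0% uptime.
uptime = monthly_uptime_pct(44_640 - 446, 44_640)
print(round(uptime, 2), service_credit_pct(uptime))
```

Writing the credit schedule this way also surfaces edge cases worth negotiating explicitly, such as whether a partial `step` of shortfall earns a full credit increment (as the `ceil` here assumes) or is prorated.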

6.2 Model Update and Deprecation

MODEL UPDATE AND DEPRECATION

(a) MINOR UPDATES: Vendor may deploy minor updates (bug fixes,
performance improvements) without notice, provided such updates
do not materially change functionality.

(b) MAJOR UPDATES: For updates that materially change AI behavior or
capabilities, Vendor shall provide:
(i) [30 days] advance notice;
(ii) Documentation of changes;
(iii) Migration guidance if applicable.

(c) MODEL DEPRECATION: Vendor shall provide [90 days] notice before
deprecating an AI model, including:
(i) Deprecation timeline;
(ii) Recommended replacement model;
(iii) Migration support.

(d) CUSTOMER OPTIONS: Upon model deprecation notice, Customer may:
(i) Migrate to replacement model;
(ii) Continue using deprecated model until end of notice period;
(iii) Terminate affected Services without penalty.

7. Liability and Insurance

7.1 AI-Specific Liability

AI LIABILITY ALLOCATION

(a) VENDOR LIABILITY: Vendor shall be liable for:
(i) Defects in AI Components caused by Vendor's negligence;
(ii) Security breaches of Vendor's AI systems;
(iii) Failure to comply with documented AI specifications;
(iv) Breach of AI-related representations and warranties.

(b) CUSTOMER LIABILITY: Customer shall be liable for:
(i) Misuse of AI Components contrary to documentation;
(ii) Failure to implement recommended human oversight;
(iii) Use of AI Components for prohibited purposes;
(iv) Decisions made based on AI outputs without appropriate review.

(c) SHARED LIABILITY: Parties shall share liability proportionally for:
(i) Bias or discrimination in AI outputs where both parties
contributed to the issue;
(ii) Third-party claims where both parties' actions contributed.

(d) AI-SPECIFIC EXCLUSIONS: Neither party shall be liable for:
(i) AI outputs that are technically accurate but unhelpful;
(ii) Business decisions made based on AI recommendations;
(iii) Subjective quality of creative AI outputs.

7.2 AI Insurance Requirements

AI INSURANCE

(a) Vendor shall maintain the following insurance coverage:
(i) Technology Errors & Omissions: $[X] per occurrence
(ii) Cyber Liability: $[X] per occurrence
(iii) AI/ML Professional Liability: $[X] per occurrence
(if available)

(b) Vendor shall provide certificates of insurance upon Customer request.

(c) Vendor shall notify Customer within [30 days] of any material change
to insurance coverage.

(d) Insurance requirements may be satisfied through:
(i) Primary policies;
(ii) Umbrella/excess policies;
(iii) Parent company policies covering Vendor.

8. Termination Clauses

8.1 AI-Specific Termination Rights

AI-SPECIFIC TERMINATION RIGHTS

In addition to standard termination rights, either party may terminate
AI-related Services upon [30 days] written notice if:

(a) REGULATORY CHANGE: A new law or regulation makes continued use of
AI Components unlawful or impractical;

(b) MATERIAL AI CHANGE: Vendor materially changes AI Components in a
way that significantly affects Customer's use case, and parties
cannot agree on acceptable alternatives within [30 days];

(c) AI INCIDENT: A serious AI Incident occurs that:
(i) Cannot be remediated within [30 days];
(ii) Exposes Customer to material regulatory or legal risk;
(iii) Materially damages Customer's reputation.

(d) COMPLIANCE FAILURE: Vendor fails to demonstrate compliance with
applicable AI regulations upon Customer's reasonable request.

Upon termination, Vendor shall provide transition assistance as
specified in standard termination provisions.

9. Clause Selection Guide

By Risk Tier

Contract Type    | Required Clauses
All AI Contracts | 2.1, 2.2, 2.3, 2.4, 2.5
GenAI Contracts  | Above + 3.1, 3.2, 3.3
EU Scope         | Above + 4.1, 4.2, 4.3
High-Risk AI     | All clauses

By Vendor Type

Vendor Type               | Key Clauses
Foundation Model Provider | 3.1, 4.1, 6.2
SaaS with Embedded AI     | 2.1, 2.2, 5.1
AI Development Services   | 2.4, 7.1, 7.2
Data Labeling Services    | 2.2, 5.1, 5.2

Document Control

Version | Date       | Author | Changes
1.0     | 2025-06-15 | Legal  | Initial release

Approval:

Role             | Name | Date
General Counsel  |      |
AI Risk Officer  |      |
Procurement Lead |      |

Disclaimer: These clauses are templates and should be reviewed by legal counsel before use. Adapt to specific circumstances and jurisdictional requirements.


CODITECT AI Risk Management Framework

Document ID: AI-RMF-21 | Version: 2.0.0 | Status: Active


AZ1.AI Inc. | CODITECT Platform

Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001


This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework Last Updated: 2026-01-15 Owner: AZ1.AI Inc. | Lead: Hal Casteel