AI Bill of Materials (AI-BOM) Template

Document Type: Technical Standard / Template
Framework Alignment: SPDX 3.0, CycloneDX 1.6, OWASP AIBOM, NIST AI RMF 2.0, EU AI Act
Effective Date: 2026-01-15
Version: 1.0


1. Purpose and Scope

1.1 Purpose

This AI Bill of Materials (AI-BOM) template provides a comprehensive inventory framework for documenting all components, dependencies, and artifacts that comprise an AI system. The AI-BOM enables:

  • Transparency: Complete visibility into AI system composition
  • Traceability: Chain of custody from data to deployment
  • Security: Vulnerability identification and supply chain risk management
  • Compliance: EU AI Act, NIST AI RMF, and ISO/IEC 42001 alignment
  • Auditability: Evidence for regulatory and internal audits

1.2 Scope

This template applies to:

  • Internal AI/ML models (trained, fine-tuned, custom)
  • Third-party AI components (APIs, SDKs, embedded models)
  • Foundation models and GPAI (General Purpose AI)
  • Training and evaluation datasets
  • AI infrastructure and runtime environments

1.3 Regulatory Requirements

| Regulation | AI-BOM Requirement | Reference |
| --- | --- | --- |
| EU AI Act | Technical documentation for high-risk AI | Article 11 |
| EU AI Act | Training data summary for GPAI | Article 53(1)(d) |
| NIST AI RMF 2.0 | Model provenance and supply chain | MAP 1.5, MANAGE 2.3 |
| ISO/IEC 42001 | AI asset inventory | Clause 8.2, Annex A.5 |

2. AI-BOM Header Information

2.1 Document Metadata

| Field | Value |
| --- | --- |
| AI-BOM ID | [Unique identifier, e.g., AIBOM-2026-001] |
| AI System Name | [System name] |
| Version | [AI-BOM version] |
| Created Date | [YYYY-MM-DD] |
| Last Updated | [YYYY-MM-DD] |
| Author | [Name/Team] |
| Owner | [System Owner] |
| Classification | [Public / Internal / Confidential / Restricted] |
| Risk Tier | [Low / Medium / High / Critical] |

2.2 AI System Identification

| Field | Value |
| --- | --- |
| System Registry ID | [Internal inventory ID] |
| System Type | [ ] Predictive/Classification [ ] Generative [ ] Recommender [ ] Computer Vision [ ] NLP [ ] Agentic |
| Deployment Status | [ ] Development [ ] Staging [ ] Production [ ] Deprecated |
| Primary Function | [Brief description of system purpose] |
| Business Domain | [Healthcare / Finance / HR / Operations / etc.] |

3. Model Components

3.1 Primary Model(s)

| Field | Model 1 | Model 2 | Model N |
| --- | --- | --- | --- |
| Model Name | | | |
| Model ID/Version | | | |
| Model Type | | | |
| Architecture | | | |
| Provider/Source | [ ] Internal [ ] Vendor [ ] Open-Source | [ ] Internal [ ] Vendor [ ] Open-Source | [ ] Internal [ ] Vendor [ ] Open-Source |
| Provider Name | | | |
| License | | | |
| Training Compute (FLOPs) | | | |
| Parameter Count | | | |
| Knowledge Cutoff | | | |
| Download/Access Date | | | |
| Hash/Checksum (SHA-256) | | | |
| Verification Status | [ ] Verified [ ] Pending [ ] N/A | [ ] Verified [ ] Pending [ ] N/A | [ ] Verified [ ] Pending [ ] N/A |
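The SHA-256 checksum and verification fields above can be produced and checked programmatically rather than transcribed by hand. A minimal sketch in Python; the function name and streaming chunk size are illustrative, not part of this standard:

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a model artifact, streaming in
    1 MiB chunks so large weight files never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path, expected_sha256):
    """Return True when the on-disk artifact matches the AI-BOM entry."""
    return sha256_of_file(path) == expected_sha256.lower()
```

Recomputing the digest at deployment time and comparing it to the AI-BOM entry is what moves the Verification Status field from "Pending" to "Verified".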

3.2 Model Provenance Chain

For each model, document the complete chain of custody:

┌─────────────────────────────────────────────────────────────┐
│ FOUNDATION MODEL │
│ Name: [e.g., Llama-3-70B] │
│ Provider: [e.g., Meta] │
│ License: [e.g., Llama 3 Community License] │
│ Access Date: [YYYY-MM-DD] │
│ Hash: [SHA-256] │
└─────────────────────────────────────────────────────────────┘

                              ↓


┌─────────────────────────────────────────────────────────────┐
│ FINE-TUNED MODEL │
│ Name: [e.g., Company-Llama-Finance-v1] │
│ Fine-tuning Date: [YYYY-MM-DD] │
│ Fine-tuning Dataset: [Reference to Section 4] │
│ Training Framework: [e.g., PyTorch 2.1, HuggingFace] │
│ Hyperparameters: [Link to config file] │
│ Hash: [SHA-256] │
└─────────────────────────────────────────────────────────────┘

                              ↓


┌─────────────────────────────────────────────────────────────┐
│ DEPLOYED MODEL │
│ Deployment ID: [Registry ID] │
│ Deployment Date: [YYYY-MM-DD] │
│ Environment: [Production/Staging] │
│ Endpoint: [API endpoint or service name] │
│ Version Tag: [e.g., v1.2.3] │
└─────────────────────────────────────────────────────────────┘
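The three-stage chain above can also be captured as structured data so tooling can walk a model's lineage automatically. A hedged sketch using Python dataclasses; the class and field names are assumptions for illustration, not a schema this standard mandates:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProvenanceLink:
    """One stage in the chain of custody: foundation, fine-tuned, or deployed."""
    name: str
    stage: str                              # "foundation" | "fine-tuned" | "deployed"
    sha256: Optional[str] = None            # artifact checksum, if applicable
    parent: Optional["ProvenanceLink"] = None  # upstream stage, None at the root

    def lineage(self):
        """Return stage names ordered from foundation model to this link."""
        chain, link = [], self
        while link is not None:
            chain.append(link.stage)
            link = link.parent
        return list(reversed(chain))
```

Walking `lineage()` from the deployed entry reproduces the chain drawn above and makes a break in custody (a missing parent) detectable in CI.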

3.3 GPAI Model Classification (EU AI Act)

Applies to General Purpose AI models only:

| Assessment Criteria | Value | Notes |
| --- | --- | --- |
| Training Compute | [FLOPs value] | ≥10²³ FLOPs = GPAI |
| Systemic Risk Threshold | [FLOPs value] | ≥10²⁵ FLOPs = Systemic Risk |
| GPAI Classification | [ ] Not GPAI [ ] Standard GPAI [ ] Systemic Risk GPAI | |
| AI Office Notification Required | [ ] Yes [ ] No | Required for systemic risk |
| Code of Practice Signatory | [ ] Yes [ ] No | Voluntary compliance path |
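The two compute thresholds in the table lend themselves to a simple classification helper. A sketch assuming training compute is reported as a single cumulative floating-point-operation count, matching the tiers named in Section 3.3:

```python
# Thresholds as stated in Section 3.3 (EU AI Act presumptions).
GPAI_THRESHOLD_FLOPS = 1e23
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def classify_gpai(training_flops):
    """Map cumulative training compute to the GPAI tier used in the
    'GPAI Classification' row above."""
    if training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS:
        return "Systemic Risk GPAI"
    if training_flops >= GPAI_THRESHOLD_FLOPS:
        return "Standard GPAI"
    return "Not GPAI"
```

A systemic-risk result is what triggers the "AI Office Notification Required" row.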

4. Dataset Components

4.1 Training Datasets

| Dataset ID | Name | Type | Source | License | Size | PII | Collection Date |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DS-001 | | [ ] Proprietary [ ] Public [ ] Synthetic | | | | [ ] Yes [ ] No | |
| DS-002 | | | | | | | |
| DS-003 | | | | | | | |

4.2 Dataset Provenance

For each training dataset, document:

| Field | DS-001 | DS-002 |
| --- | --- | --- |
| Original Source | | |
| Collection Method | | |
| Collection Date Range | | |
| Data Processing Applied | | |
| Anonymization Method | | |
| Quality Assessment | | |
| Bias Assessment | | |
| Right to Use Verification | [ ] Verified [ ] Pending | |
| Copyright Compliance | [ ] Compliant [ ] Needs Review | |
| robots.txt Respected | [ ] Yes [ ] No [ ] N/A | |

4.3 Training Data Summary (EU AI Act GPAI Requirement)

Required for GPAI models (Article 53(1)(d)):

| Summary Element | Description |
| --- | --- |
| General Description | [High-level description of training corpus] |
| Data Categories | [Types of data included: text, images, code, etc.] |
| Data Sources | [Categories of sources: web crawl, licensed, proprietary] |
| Geographic Coverage | [Regions/languages represented] |
| Temporal Coverage | [Date range of source data] |
| Notable Exclusions | [What was explicitly excluded] |
| Synthetic Data Proportion | [% synthetic vs. natural data] |
| Copyright Approach | [How copyright compliance was achieved] |

5. Software Dependencies

5.1 ML Frameworks and Libraries

| Component | Version | License | Source | CVE Status | Last Checked |
| --- | --- | --- | --- | --- | --- |
| PyTorch | | | | | |
| TensorFlow | | | | | |
| HuggingFace Transformers | | | | | |
| LangChain | | | | | |
| scikit-learn | | | | | |
| NumPy | | | | | |
| Pandas | | | | | |
| [Other] | | | | | |
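The Version column above can be captured from the running environment rather than transcribed by hand. A sketch using Python's standard `importlib.metadata`; the package names passed in are whatever the table lists, and uninstalled packages are recorded as `None` rather than raising:

```python
from importlib import metadata

def installed_versions(packages):
    """Record installed distribution versions for the Section 5.1 table.

    Returns a dict of package name -> version string, or None when the
    package is not installed in the current environment.
    """
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = None
    return versions
```

Running this in the deployment environment (not the developer's laptop) keeps the AI-BOM aligned with what actually serves traffic.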

5.2 Third-Party AI Services

| Service | Provider | API Version | Contract ID | Data Processing | Retention Policy |
| --- | --- | --- | --- | --- | --- |
| | | | | [ ] Processes [ ] Stores [ ] Trains | |

5.3 Infrastructure Components

| Component | Provider | Version/SKU | Region | Security Certification |
| --- | --- | --- | --- | --- |
| Compute | | | | |
| Storage | | | | |
| Model Serving | | | | |
| Vector DB | | | | |
| Monitoring | | | | |

6. Runtime Configuration

6.1 Model Serving Configuration

| Parameter | Value | Rationale |
| --- | --- | --- |
| Max Tokens (Input) | | |
| Max Tokens (Output) | | |
| Temperature | | |
| Top-P | | |
| Frequency Penalty | | |
| Presence Penalty | | |
| System Prompt | [Link to version-controlled prompt] | |
| Safety Filters | [ ] Enabled [ ] Disabled | |

6.2 Guardrails Configuration

| Guardrail Type | Implementation | Threshold | Action |
| --- | --- | --- | --- |
| Input PII Detection | | | |
| Input Injection Detection | | | |
| Output Toxicity Filter | | | |
| Output PII Redaction | | | |
| Token Rate Limiting | | | |
| Cost Rate Limiting | | | |
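As one concrete illustration of the "Output PII Redaction" row, a minimal regex-based sketch. The patterns shown catch only email addresses and US-SSN-like strings and are an assumption for illustration, not a vetted PII detector; production guardrails typically layer several detection methods:

```python
import re

# Illustrative patterns only: email addresses and SSN-like strings.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_pii(text, token="[REDACTED]"):
    """Replace matched email addresses and SSN-like strings with a token
    before model output leaves the serving boundary."""
    return SSN.sub(token, EMAIL.sub(token, text))
```

The Implementation, Threshold, and Action columns above would record which detector backs this step, its match confidence cutoff, and whether matches are redacted, blocked, or flagged.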

7. Agentic AI Components (If Applicable)

7.1 Agent Configuration

| Field | Value |
| --- | --- |
| Agent Type | [ ] Single Agent [ ] Multi-Agent [ ] Orchestrated |
| Orchestrator | [If multi-agent, specify orchestrator] |
| Action Boundary | [Permitted action types] |
| Kill Switch | [ ] Implemented [ ] Tested [ ] Documented |

7.2 Tool Access

| Tool Name | Type | Access Level | Rate Limit | Audit Logging |
| --- | --- | --- | --- | --- |
| | | [ ] Read [ ] Write [ ] Execute | | [ ] Yes [ ] No |

7.3 Multi-Agent Communication

| Agent ID | Role | Communication Protocol | Message Logging |
| --- | --- | --- | --- |
| | | | [ ] Yes [ ] No |

8. Security and Vulnerability Assessment

8.1 Known Vulnerabilities

| Component | CVE ID | Severity | Status | Mitigation |
| --- | --- | --- | --- | --- |
| | | [ ] Critical [ ] High [ ] Medium [ ] Low | [ ] Patched [ ] Mitigated [ ] Accepted | |

8.2 Security Assessments

| Assessment Type | Date | Result | Report Link |
| --- | --- | --- | --- |
| Threat Model | | | |
| Red Team Test | | | |
| Penetration Test | | | |
| Model Extraction Test | | | |
| Data Leakage Test | | | |

8.3 Supply Chain Risk Assessment

| Risk Category | Assessment | Mitigation |
| --- | --- | --- |
| Model Poisoning | [ ] Low [ ] Medium [ ] High | |
| Data Poisoning | [ ] Low [ ] Medium [ ] High | |
| Dependency Vulnerability | [ ] Low [ ] Medium [ ] High | |
| Third-Party Service Risk | [ ] Low [ ] Medium [ ] High | |
| License Compliance Risk | [ ] Low [ ] Medium [ ] High | |

9. Compliance Attestations

9.1 Regulatory Compliance

| Regulation | Applicable | Status | Evidence Location |
| --- | --- | --- | --- |
| EU AI Act (High-Risk) | [ ] Yes [ ] No | [ ] Compliant [ ] In Progress [ ] N/A | |
| EU AI Act (GPAI) | [ ] Yes [ ] No | [ ] Compliant [ ] In Progress [ ] N/A | |
| NIST AI RMF | [ ] Yes [ ] No | [ ] Aligned [ ] In Progress [ ] N/A | |
| ISO/IEC 42001 | [ ] Yes [ ] No | [ ] Certified [ ] In Progress [ ] N/A | |
| FDA 21 CFR Part 11 | [ ] Yes [ ] No | [ ] Validated [ ] In Progress [ ] N/A | |
| HIPAA | [ ] Yes [ ] No | [ ] Compliant [ ] In Progress [ ] N/A | |
| SOC 2 | [ ] Yes [ ] No | [ ] Certified [ ] In Progress [ ] N/A | |

9.2 Internal Governance

| Control | Status | Evidence |
| --- | --- | --- |
| Inventory Registration | [ ] Complete | |
| Risk Tiering | [ ] Complete | |
| System Card | [ ] Complete | |
| AIA (if High-Risk) | [ ] Complete [ ] N/A | |
| Pre-Production Gate | [ ] Passed | |
| Monitoring Active | [ ] Yes | |

10. Change History

| Version | Date | Author | Changes |
| --- | --- | --- | --- |
| 1.0 | | | Initial AI-BOM |

11. Approvals

| Role | Name | Date | Signature |
| --- | --- | --- | --- |
| System Owner | | | |
| Security Review | | | |
| AI Governance | | | |

Appendix A: AI-BOM Generation Checklist

A.1 Minimum Required Fields (All AI Systems)

  • AI-BOM ID and metadata
  • Primary model identification
  • Model provenance (origin, license)
  • Training dataset sources
  • Software dependencies with versions
  • Security assessment status
  • Governance approvals
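The A.1 checklist can be enforced mechanically when AI-BOMs are stored as structured records. A sketch assuming a flat dict whose key names are illustrative, not a schema this standard mandates:

```python
# Keys mirroring the A.1 minimum required fields; names are illustrative.
REQUIRED_FIELDS = [
    "ai_bom_id",        # AI-BOM ID and metadata
    "model_name",       # Primary model identification
    "model_provenance", # Origin and license
    "dataset_sources",  # Training dataset sources
    "dependencies",     # Software dependencies with versions
    "security_status",  # Security assessment status
    "approvals",        # Governance approvals
]

def missing_fields(ai_bom):
    """Return the A.1 minimum fields that are absent or empty in a record,
    so an empty list means the AI-BOM passes the baseline check."""
    return [f for f in REQUIRED_FIELDS if not ai_bom.get(f)]
```

Wiring this into a pre-production gate (Section 9.2) turns the checklist from guidance into an enforced control.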

A.2 Additional Fields (High-Risk AI)

  • Complete model provenance chain
  • Dataset provenance and bias assessment
  • Full CVE assessment
  • Red team results
  • Compliance attestations

A.3 Additional Fields (GPAI)

  • Training compute (FLOPs)
  • Systemic risk classification
  • Training data summary (public)
  • Copyright compliance documentation
  • AI Office notification (if applicable)

A.4 Additional Fields (Agentic AI)

  • Agent configuration
  • Tool access inventory
  • Action boundaries
  • Kill switch documentation
  • Multi-agent communication map

Appendix B: SPDX 3.0 AI Profile Mapping

This AI-BOM aligns with SPDX 3.0 AI and Dataset profiles:

| SPDX Field | AI-BOM Section |
| --- | --- |
| AI:modelType | Section 3.1 |
| AI:trainingEnergy | Section 3.3 (compute) |
| AI:informationAboutTraining | Section 4.3 |
| AI:limitation | Section 3.2 (System Card) |
| Dataset:datasetType | Section 4.1 |
| Dataset:dataCollectionProcess | Section 4.2 |
| Dataset:intendedUse | Section 4.2 |
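The mapping above can be expressed as a lookup table so export tooling can route SPDX properties to the right AI-BOM section. The SPDX property names follow the table; the helper name and the "unmapped" sentinel are illustrative:

```python
# Appendix B mapping as data; keys are SPDX 3.0 AI/Dataset profile fields.
SPDX_TO_AIBOM = {
    "AI:modelType": "Section 3.1",
    "AI:trainingEnergy": "Section 3.3 (compute)",
    "AI:informationAboutTraining": "Section 4.3",
    "AI:limitation": "Section 3.2 (System Card)",
    "Dataset:datasetType": "Section 4.1",
    "Dataset:dataCollectionProcess": "Section 4.2",
    "Dataset:intendedUse": "Section 4.2",
}

def aibom_section_for(spdx_field):
    """Return which AI-BOM section carries a given SPDX field, or a
    sentinel for fields this template does not yet cover."""
    return SPDX_TO_AIBOM.get(spdx_field, "unmapped")
```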

Document Classification: [Internal / Confidential]
Next Review Date: 2027-01-15


CODITECT AI Risk Management Framework

Document ID: AI-RMF-13 | Version: 2.0.0 | Status: Active


AZ1.AI Inc. | CODITECT Platform

Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001


This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework
Last Updated: 2026-01-15
Owner: AZ1.AI Inc. | Lead: Hal Casteel