# AI System Card / Model Card Template
*Technical Transparency & Audit Record*

## Document Control
| Field | Response |
|---|---|
| Document Status | [Draft / Under Review / Approved / Deprecated] |
| Associated Registry ID | [e.g., AI-2025-001] |
| Last Updated | 2026-01-15 |
| Version | [e.g., v1.0.0] |
| Framework Alignment | NIST AI RMF 2.0, EU AI Act Article 11 |
## 1. System Identity
| Field | Response |
|---|---|
| System Name | |
| Version | |
| Model Type | [e.g., Fine-tuned LLM, Random Forest, CNN, Vendor SaaS, Agentic System] |
| Business Owner | [Name/Title] |
| Technical Owner | [Name/Title] |
| Development Date | |
| Production Date | |
| Risk Tier | [Low / Medium / High / Critical] |
| EU AI Act Classification | [Minimal / Limited / High-Risk / GPAI / Systemic Risk GPAI] |
## 2. Intended Use & Limitations

### 2.1 Intended Use Cases
| Field | Description |
|---|---|
| Primary Task | What is this model designed to do? |
| Target Audience | Who are the users? |
| Deployment Environment | Where does this system operate? |
| Expected Frequency | How often will it be used? |
### 2.2 Out-of-Scope Use Cases (Anti-Patterns)
What should this model NOT be used for?
| Prohibited Use | Reason |
|---|---|
### 2.3 Limitations
| Limitation Type | Description |
|---|---|
| Knowledge Cutoff | (For LLMs) Training data end date |
| Language Support | Supported languages |
| Input Constraints | Maximum input size, format requirements |
| Environmental Factors | Conditions affecting performance |
| Known Weaknesses | Specific areas of poor performance |
## 3. Data Lineage & Privacy

### 3.1 Training Data
| Field | Response |
|---|---|
| Data Sources | |
| Data Collection Period | |
| Data Volume | |
| Preprocessing Steps | |

### 3.2 Sensitive Data Assessment
| Data Category | Present | Mitigation |
|---|---|---|
| PII (Personally Identifiable Information) | [Yes/No] | |
| SPI (Sensitive Personal Info - Health/Finance) | [Yes/No] | |
| IP (Intellectual Property) | [Yes/No] | |
| Biometric Data | [Yes/No] | |

### 3.3 Data Rights
| Question | Response |
|---|---|
| Legal right to use data for training? | [Yes / No / Partial] |
| Commercial use permitted by license? | [Yes / No] |
| Copyright compliance verified? | [Yes / No] |
| Training data summary published? (GPAI) | [Yes / No / N/A] |
## 4. Technical Specifications
| Field | Response |
|---|---|
| Architecture | [e.g., Transformer, XGBoost, CNN] |
| Framework | [e.g., PyTorch, TensorFlow, Scikit-learn] |
| Model Size | [Parameters / File size] |
| Input Format | [e.g., Text < 4096 tokens, JSON, Image PNG] |
| Output Format | [e.g., Probability score, Text, Classification] |
| Compute Resources | [e.g., GPU type, cloud service] |
| Inference Latency (p50/p95) | |
| Throughput | [Requests per second] |
### 4.1 Dependencies (AI-BOM)
| Component | Version | License | CVE Status |
|---|---|---|---|
## 5. Performance & Evaluation

### 5.1 Evaluation Metrics
| Metric | Threshold | Actual Result | Pass/Fail |
|---|---|---|---|
| Accuracy / F1 Score | | | |
| Precision / Recall | | | |
| Latency (p95) | | | |
| Throughput | | | |
| Error Rate | | | |
| Hallucination Rate (GenAI) | | | |
| Faithfulness Score (RAG) | | | |
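
The core classification metrics recorded above can be reproduced directly from a confusion matrix. A minimal sketch using hypothetical evaluation counts (the numbers are illustrative, not from any real run):

```python
# Hypothetical confusion-matrix counts from an offline evaluation run.
tp, fp, fn = 90, 10, 30

precision = tp / (tp + fp)   # fraction of positive predictions that were correct
recall = tp / (tp + fn)      # fraction of actual positives that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```

Recording the raw counts alongside the derived metrics makes the Pass/Fail column auditable later.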
### 5.2 Bias & Fairness Testing (Required for High Risk)
| Protected Group | Metric | Result | Within Threshold? |
|---|---|---|---|
| Gender | Demographic Parity | | |
| Age | Equalized Odds | | |
| Race/Ethnicity | Disparate Impact Ratio | | |
| Other: | | | |

**Mitigation Steps Taken:**
[Describe any de-biasing or mitigation applied]
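
Two of the fairness metrics named in the table can be computed directly from group-level selection rates. A minimal sketch with hypothetical counts; the 0.8 cutoff reflects the common four-fifths rule of thumb, not a mandated threshold:

```python
# Hypothetical positive-outcome counts per group from a validation set.
selected = {"group_a": 120, "group_b": 45}
total = {"group_a": 400, "group_b": 200}

# Selection rate per group: group_a = 0.30, group_b = 0.225.
rates = {g: selected[g] / total[g] for g in total}

# Disparate impact ratio: unprivileged rate / privileged rate.
di_ratio = rates["group_b"] / rates["group_a"]

# Demographic parity difference: absolute gap in selection rates.
dp_diff = abs(rates["group_a"] - rates["group_b"])

# Four-fifths rule of thumb: flag for review if the ratio falls below 0.8.
flagged = di_ratio < 0.8
```

Equalized odds requires per-group true/false positive rates rather than raw selection rates, so it needs labeled outcomes as well.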
### 5.3 Adversarial Testing / Red Teaming
| Field | Response |
|---|---|
| Test Date | |
| Testing Team | [Internal / External / Both] |

| Scenario Tested | Outcome | Remediation |
|---|---|---|
| Prompt Injection | | |
| Jailbreaking | | |
| Data Extraction | | |
| Model Extraction | | |
| Toxic Output Generation | | |
| PII Leakage | | |
**Overall Red Team Outcome:** [Passed / Failed / Conditional Pass]
## 6. Risk Assessment

### 6.1 Risk Register
| Risk | Likelihood | Impact | Mitigation Strategy | Residual Risk |
|---|---|---|---|---|
| Hallucination | | | | |
| Data Leakage | | | | |
| Bias/Discrimination | | | | |
| Model Drift | | | | |
| Security Breach | | | | |
| Service Unavailability | | | | |
| Regulatory Non-Compliance | | | | |
### 6.2 Threat Model Summary
| Threat Category | Relevant | Controls Applied |
|---|---|---|
| Prompt Injection | [Yes/No] | |
| Data Poisoning | [Yes/No] | |
| Model Extraction | [Yes/No] | |
| Adversarial Inputs | [Yes/No] | |
| Supply Chain | [Yes/No] | |
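
As one illustrative control for the prompt-injection row, a naive deny-list filter can screen obvious injection phrasing before input reaches the model. This is a toy sketch only; pattern lists are trivially bypassed and should at most supplement layered defenses (input sanitization, output filtering, privilege separation):

```python
import re

# Illustrative-only deny patterns; real deployments need defense in depth.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard (the|your) system prompt",
    r"you are now [a-z]",
]

def looks_like_injection(text: str) -> bool:
    """Return True if the input matches any known injection phrasing."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

The "Controls Applied" column should record where such a filter sits in the pipeline and what happens on a match (block, flag, or route to human review).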
## 7. Operational Details

### 7.1 Monitoring Plan
| Monitoring Type | Tool | Alert Threshold | Response SLA |
|---|---|---|---|
| Performance Drift | | | |
| Bias Metrics | | | |
| Error Rate | | | |
| Latency | | | |
| Security Events | | | |
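
Performance drift is commonly tracked with the Population Stability Index (PSI) over matching score-distribution buckets. A minimal sketch with hypothetical distributions; the thresholds cited in the comment are a widespread rule of thumb, not a standard:

```python
import math

def psi(expected, actual):
    """Population Stability Index over matching histogram buckets.
    Both inputs are bucket proportions that each sum to 1."""
    eps = 1e-6  # guard against empty buckets in the log ratio
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

# Hypothetical score distributions: training baseline vs. this week.
baseline = [0.25, 0.25, 0.25, 0.25]
current = [0.30, 0.30, 0.20, 0.20]

drift = psi(baseline, current)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate, > 0.25 major drift.
```

The "Alert Threshold" column can then hold a concrete PSI cutoff, making the drift row testable rather than aspirational.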
### 7.2 Human Oversight
| Field | Response |
|---|---|
| Human-in-the-loop implemented? | [Yes / No] |
| Who reviews outputs? | |
| Review frequency | |
| Override mechanism available? | [Yes / No] |
| Escalation path defined? | [Yes / No] |
### 7.3 Maintenance Schedule
| Activity | Frequency | Owner |
|---|---|---|
| Model Retraining | | |
| Performance Review | | |
| Bias Re-evaluation | | |
| Security Assessment | | |
### 7.4 Rollback & Recovery
| Field | Response |
|---|---|
| Rollback Plan Documented? | [Yes / No] |
| Kill Switch Available? | [Yes / No] |
| Kill Switch Tested? | [Yes / No] |
| Last Rollback Test Date | |
| Recovery Time Objective (RTO) | |
## 8. Transparency & Explainability

### 8.1 Explainability Level
- Global Explanation: Feature importance, model behavior patterns available
- Local Explanation: Individual decision explanations (SHAP/LIME) available
- Counterfactual: "What would need to change" explanations available
- Black Box: Cannot explain individual outputs (requires executive waiver)
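
For the counterfactual level above, the "what would need to change" question has a closed form when the scorer is linear. A toy sketch with hypothetical weights and an arbitrary threshold; real models generally need a counterfactual search method rather than this algebra:

```python
# Hypothetical linear credit-style scorer: score = sum(w_f * x_f).
weights = {"income": 0.8, "debt": -0.5, "tenure": 0.3}
threshold = 0.0  # decision boundary: score >= threshold means approve

def score(x):
    return sum(weights[f] * v for f, v in x.items())

def counterfactual_delta(x, feature):
    """How much `feature` must change for the score to reach the threshold."""
    gap = threshold - score(x)
    return gap / weights[feature]

applicant = {"income": 0.2, "debt": 1.0, "tenure": 0.5}
# score(applicant) = 0.16 - 0.5 + 0.15 = -0.19, i.e. below the threshold.
delta = counterfactual_delta(applicant, "income")
```

A user-facing explanation derived this way reads as "approval would require income to rise by `delta`", which is the kind of statement the appeal/contest process in 8.2 depends on.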
### 8.2 User-Facing Transparency
| Requirement | Implemented | Details |
|---|---|---|
| AI Disclosure to Users | [Yes/No] | |
| Confidence Indicators | [Yes/No] | |
| Source Citations (RAG) | [Yes/No] | |
| Limitation Warnings | [Yes/No] | |
| Appeal/Contest Process | [Yes/No] | |
## 9. Governance Approvals
| Role | Name | Date | Signature |
|---|---|---|---|
| Model Owner | | | |
| Technical Reviewer | | | |
| Privacy Officer | | | |
| Security Officer | | | |
| AI Risk Officer | | | |
| Legal Counsel | | | |
| Executive Sponsor (Critical only) | | | |
## 10. Change History
| Version | Date | Author | Changes |
|---|---|---|---|
## 11. Attachments & Evidence
| Document | Location | Date |
|---|---|---|
| Training Data Documentation | | |
| Evaluation Results | | |
| Red Team Report | | |
| Bias Testing Report | | |
| Privacy Impact Assessment | | |
| Threat Model | | |
| Incident Response Plan | | |
## EU AI Act Technical Documentation Checklist (Article 11)
*For High-Risk AI Systems*
| Requirement | Complete | Location |
|---|---|---|
| General description of AI system | [ ] | |
| Detailed description of elements and development | [ ] | |
| Monitoring, functioning, control description | [ ] | |
| Risk management system description | [ ] | |
| Changes through lifecycle | [ ] | |
| Harmonized standards applied | [ ] | |
| Design and development decisions | [ ] | |
| Design specifications (inputs, outputs, logic) | [ ] | |
| Human oversight measures | [ ] | |
| Expected lifetime and maintenance | [ ] | |
| EU declaration of conformity | [ ] | |
**Next Step:** Proceed to Artifact 7: Algorithmic Impact Assessment (AIA)
**CODITECT AI Risk Management Framework**
Document ID: AI-RMF-06 | Version: 2.0.0 | Status: Active
AZ1.AI Inc. | CODITECT Platform
Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001

This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework | Last Updated: 2026-01-15 | Owner: AZ1.AI Inc. | Lead: Hal Casteel