Skip to main content

AI System Card / Model Card Template

Technical Transparency & Audit Record


Document Control

FieldResponse
Document Status[Draft / Under Review / Approved / Deprecated]
Associated Registry ID[e.g., AI-2025-001]
Last Updated2026-01-15
Version[e.g., v1.0.0]
Framework AlignmentNIST AI RMF 2.0, EU AI Act Article 11

1. System Identity

FieldResponse
System Name
Version
Model Type[e.g., Fine-tuned LLM, Random Forest, CNN, Vendor SaaS, Agentic System]
Business Owner[Name/Title]
Technical Owner[Name/Title]
Development Date
Production Date
Risk Tier[Low / Medium / High / Critical]
EU AI Act Classification[Minimal / Limited / High-Risk / GPAI / Systemic Risk GPAI]

2. Intended Use & Limitations

2.1 Intended Use Cases

FieldDescription
Primary TaskWhat is this model designed to do?
Target AudienceWho are the users?
Deployment EnvironmentWhere does this system operate?
Expected FrequencyHow often will it be used?

2.2 Out-of-Scope Use Cases (Anti-Patterns)

What should this model NOT be used for?

Prohibited UseReason

2.3 Limitations

Limitation TypeDescription
Knowledge Cutoff(For LLMs) Training data end date
Language SupportSupported languages
Input ConstraintsMaximum input size, format requirements
Environmental FactorsConditions affecting performance
Known WeaknessesSpecific areas of poor performance

3. Data Lineage & Privacy

3.1 Training Data

FieldResponse
Data Sources
Data Collection Period
Data Volume
Preprocessing Steps

3.2 Sensitive Data Assessment

Data CategoryPresentMitigation
PII (Personally Identifiable Information)[Yes/No]
SPI (Sensitive Personal Info - Health/Finance)[Yes/No]
IP (Intellectual Property)[Yes/No]
Biometric Data[Yes/No]

3.3 Data Rights

QuestionResponse
Legal right to use data for training?[Yes / No / Partial]
Commercial use permitted by license?[Yes / No]
Copyright compliance verified?[Yes / No]
Training data summary published? (GPAI)[Yes / No / N/A]

4. Technical Specifications

FieldResponse
Architecture[e.g., Transformer, XGBoost, CNN]
Framework[e.g., PyTorch, TensorFlow, Scikit-learn]
Model Size[Parameters / File size]
Input Format[e.g., Text < 4096 tokens, JSON, Image PNG]
Output Format[e.g., Probability score, Text, Classification]
Compute Resources[e.g., GPU type, cloud service]
Inference Latency (p50/p95)
Throughput[Requests per second]

4.1 Dependencies (AI-BOM)

ComponentVersionLicenseCVE Status

5. Performance & Evaluation

5.1 Evaluation Metrics

MetricThresholdActual ResultPass/Fail
Accuracy / F1 Score
Precision / Recall
Latency (p95)
Throughput
Error Rate
Hallucination Rate (GenAI)
Faithfulness Score (RAG)

5.2 Bias & Fairness Testing (Required for High Risk)

Protected GroupMetricResultWithin Threshold
GenderDemographic Parity
AgeEqualized Odds
Race/EthnicityDisparate Impact Ratio
Other:

Mitigation Steps Taken:

[Describe any de-biasing or mitigation applied]

5.3 Adversarial Testing / Red Teaming

FieldResponse
Test Date
Testing Team[Internal / External / Both]
Scenario TestedOutcomeRemediation
Prompt Injection
Jailbreaking
Data Extraction
Model Extraction
Toxic Output Generation
PII Leakage

Overall Red Team Outcome: [Passed / Failed / Conditional Pass]


6. Risk Assessment

6.1 Risk Register

RiskLikelihoodImpactMitigation StrategyResidual Risk
Hallucination
Data Leakage
Bias/Discrimination
Model Drift
Security Breach
Service Unavailability
Regulatory Non-Compliance

6.2 Threat Model Summary

Threat CategoryRelevantControls Applied
Prompt Injection[Yes/No]
Data Poisoning[Yes/No]
Model Extraction[Yes/No]
Adversarial Inputs[Yes/No]
Supply Chain[Yes/No]

7. Operational Details

7.1 Monitoring Plan

Monitoring TypeToolAlert ThresholdResponse SLA
Performance Drift
Bias Metrics
Error Rate
Latency
Security Events

7.2 Human Oversight

FieldResponse
Human-in-the-loop implemented?[Yes / No]
Who reviews outputs?
Review frequency
Override mechanism available?[Yes / No]
Escalation path defined?[Yes / No]

7.3 Maintenance Schedule

ActivityFrequencyOwner
Model Retraining
Performance Review
Bias Re-evaluation
Security Assessment

7.4 Rollback & Recovery

FieldResponse
Rollback Plan Documented?[Yes / No]
Kill Switch Available?[Yes / No]
Kill Switch Tested?[Yes / No]
Last Rollback Test Date
Recovery Time Objective (RTO)

8. Transparency & Explainability

8.1 Explainability Level

  • Global Explanation: Feature importance, model behavior patterns available
  • Local Explanation: Individual decision explanations (SHAP/LIME) available
  • Counterfactual: "What would need to change" explanations available
  • Black Box: Cannot explain individual outputs (requires executive waiver)

8.2 User-Facing Transparency

RequirementImplementedDetails
AI Disclosure to Users[Yes/No]
Confidence Indicators[Yes/No]
Source Citations (RAG)[Yes/No]
Limitation Warnings[Yes/No]
Appeal/Contest Process[Yes/No]

9. Governance Approvals

RoleNameDateSignature
Model Owner
Technical Reviewer
Privacy Officer
Security Officer
AI Risk Officer
Legal Counsel
Executive Sponsor (Critical only)

10. Change History

VersionDateAuthorChanges

11. Attachments & Evidence

DocumentLocationDate
Training Data Documentation
Evaluation Results
Red Team Report
Bias Testing Report
Privacy Impact Assessment
Threat Model
Incident Response Plan

EU AI Act Technical Documentation Checklist (Article 11)

For High-Risk AI Systems

RequirementCompleteLocation
General description of AI system[ ]
Detailed description of elements and development[ ]
Monitoring, functioning, control description[ ]
Risk management system description[ ]
Changes through lifecycle[ ]
Harmonized standards applied[ ]
Design and development decisions[ ]
Design specifications (inputs, outputs, logic)[ ]
Human oversight measures[ ]
Expected lifetime and maintenance[ ]
EU declaration of conformity[ ]

Next Step: Proceed to Artifact 7: Algorithmic Impact Assessment (AIA)


CODITECT AI Risk Management Framework

Document ID: AI-RMF-06 | Version: 2.0.0 | Status: Active


AZ1.AI Inc. | CODITECT Platform

Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001


This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework Last Updated: 2026-01-15 Owner: AZ1.AI Inc. | Lead: Hal Casteel