Enterprise AI Policy & Standard
Acceptable Use and Development Requirements
Document Control
| Field | Details |
|---|---|
| Document Type | Enterprise Policy |
| Applies To | All Employees, Contractors, Developers, and Third-Party Vendors |
| Effective Date | 2026-01-15 |
| Version | v2.0 |
| Review Cadence | Annual minimum |
| Policy Owner | AI Governance Lead / AI Risk Officer |
1. Purpose
This policy establishes the minimum mandatory standards for the development, procurement, and usage of Artificial Intelligence (AI) systems within the organization. Its purpose is to mitigate legal, security, privacy, and reputational risks while enabling responsible innovation.
2. Scope
This policy applies to:
| Scope | Coverage |
|---|---|
| Built AI | Models trained, fine-tuned, or developed internally |
| Bought AI | Vendor software, SaaS, or platforms with embedded AI features |
| Employee Use | Usage of AI tools (public or enterprise) for business tasks |
| Agentic AI | Autonomous agents, multi-agent systems, tool-using AI |
| GPAI Models | General-purpose AI models placed on the market |
3. The "Red Lines" (Prohibited Uses)
The following uses of AI are strictly prohibited unless an explicit, written waiver is granted by the AI Governance Board and Legal Counsel. No business justification overrides these prohibitions.
3.1 EU AI Act Prohibited Practices (Article 5)
| Prohibited Use | Description |
|---|---|
| Social Scoring | AI systems that evaluate trustworthiness or social standing based on social behavior or predicted personality traits |
| Subliminal Manipulation | AI designed to distort behavior or manipulate decision-making causing physical or psychological harm |
| Exploitation of Vulnerabilities | AI that exploits vulnerabilities related to age, disability, or socio-economic situation |
| Real-Time Remote Biometric Identification | In publicly accessible spaces (except narrow law enforcement exemptions) |
| Emotion Recognition | In workplace or educational institutions for performance/behavior evaluation |
| Biometric Categorization | Inferring race, political opinions, religious beliefs, sexual orientation from biometrics |
| Untargeted Facial Recognition Scraping | Creating facial recognition databases through untargeted scraping |
| Predictive Policing | Risk assessments based solely on profiling or personality traits |
3.2 Additional Enterprise Prohibitions
| Prohibited Use | Description |
|---|---|
| Automated Employment Decisions | AI systems that make final hiring or termination decisions without human review |
| Undisclosed Deepfakes | Generating synthetic media of real persons without consent and clear disclosure |
| Medical/Legal Advice | AI providing diagnosis or legal advice without professional oversight |
| Autonomous Weapons | AI systems designed or weaponized to cause physical harm to humans |
| Mass Surveillance | Continuous monitoring of employee behavior without consent |
4. Standards for AI Builders (Developers & Data Scientists)
4.1 Data Management
| Requirement | Standard |
|---|---|
| Data Separation | Production data must never be used for training/fine-tuning unless anonymized/de-identified and approved by Privacy Office |
| Data Lineage | All training datasets must be documented (source, collection method, rights to use) |
| Poisoning Prevention | Training data pipelines must be secured against unauthorized modification |
| Bias Detection | Training data must be analyzed for demographic representation issues |
| Copyright Compliance | Training data must be reviewed for copyright compliance; maintain documentation |
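One way a team might operationalize the bias-detection requirement is a representation check over training data before Privacy Office approval. The sketch below is illustrative only: the demographic attribute, the 5% floor, and the sample data are assumptions, not mandated values.

```python
from collections import Counter

def representation_report(records, attribute, min_share=0.05):
    """Flag demographic groups whose share of the training data
    falls below a minimum threshold (illustrative value)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {"count": n, "share": round(share, 3),
                         "underrepresented": share < min_share}
    return report

# Hypothetical sample: the "55+" group falls below the 5% floor.
data = ([{"age_band": "18-34"}] * 60
        + [{"age_band": "35-54"}] * 37
        + [{"age_band": "55+"}] * 3)
report = representation_report(data, "age_band")
```

Results like these would feed the dataset documentation required under Data Lineage, not replace a full fairness review.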
4.2 Model Security
| Requirement | Standard |
|---|---|
| No Hardcoded Secrets | API keys, credentials, tokens must never be embedded in code or notebooks |
| Safe Serialization | Only safe serialization formats (e.g., Safetensors); pickle from untrusted sources prohibited |
| Adversarial Testing | High-risk models must undergo red teaming before deployment |
| Model Provenance | Document model origin, training process, and all modifications |
| Vulnerability Management | Monitor for CVEs affecting model libraries; patch within SLA |
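Two of these controls lend themselves to simple automated checks. The sketch below, with an assumed allow-list and environment-variable name, shows (a) rejecting model artifacts in pickle-based formats that can execute code on load, and (b) reading credentials from the environment rather than from source code.

```python
import os

# Illustrative allow-list; the approved formats are set by Security.
APPROVED_MODEL_FORMATS = {".safetensors", ".onnx"}

def check_model_artifact(path):
    """Reject serialization formats that can execute arbitrary code
    on load (e.g., pickle-based .pt/.pkl from untrusted sources)."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in APPROVED_MODEL_FORMATS:
        raise ValueError(f"Disallowed model format: {ext}")
    return path

def load_api_key(var="MODEL_REGISTRY_TOKEN"):
    """Secrets come from the environment (or a vault), never from
    code or notebooks. The variable name here is hypothetical."""
    token = os.environ.get(var)
    if not token:
        raise RuntimeError(f"{var} not set; fetch it from the secrets manager")
    return token
```

A check like `check_model_artifact` can run in CI on every model promotion, making the Safe Serialization rule enforceable rather than advisory.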
4.3 Development Lifecycle
| Requirement | Standard |
|---|---|
| Version Control | All models must be versioned in the Model Registry |
| Reproducibility | Training code and hyperparameters must be archived |
| Evaluation | No model promoted to production without passing defined metrics (accuracy + fairness) |
| Documentation | System Card/Model Card required for all production models |
| Testing | Automated testing pipeline required for all models |
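The Evaluation requirement implies a promotion gate: a model passes only when every required metric, accuracy and fairness alike, meets its floor. A minimal sketch, with thresholds that are illustrative rather than policy-defined:

```python
def promotion_gate(metrics, thresholds):
    """Return (approved, failures). A model is promoted only when
    every required metric meets its threshold; any single miss blocks."""
    failures = [name for name, floor in thresholds.items()
                if metrics.get(name, float("-inf")) < floor]
    return (not failures, failures)

# Illustrative floors; real values come from the model's risk assessment.
thresholds = {"accuracy": 0.90, "demographic_parity": 0.80}
ok, failures = promotion_gate(
    {"accuracy": 0.93, "demographic_parity": 0.75}, thresholds)
```

Here the model clears the accuracy floor but fails the fairness floor, so promotion is blocked; wiring this gate into the automated testing pipeline satisfies the Testing row as well.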
4.4 Agentic AI Development
| Requirement | Standard |
|---|---|
| Action Boundaries | All permitted actions must be explicitly defined and documented |
| Permission Scoping | Least privilege principle for tool access |
| Audit Trail | All agent actions must be logged with context |
| Kill Switch | Tested mechanism to immediately halt agent operation |
| Rate Limiting | Implement rate limits on agent actions |
| Sandboxing | Test agents in isolated environments before production |
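Several of the agentic controls above (action boundaries, rate limiting, audit trail, kill switch) can live in a single authorization layer between the agent and its tools. This is a minimal sketch with assumed names and limits, not a reference implementation:

```python
import time

class AgentGuard:
    """Illustrative guard combining an action allow-list, a per-minute
    rate limit, a kill switch, and an audit log of every decision."""

    def __init__(self, allowed_actions, max_actions_per_minute=10):
        self.allowed = set(allowed_actions)       # documented action boundary
        self.max_per_minute = max_actions_per_minute
        self.timestamps = []
        self.halted = False
        self.audit_log = []

    def kill(self):
        """Kill switch: immediately halt all agent actions."""
        self.halted = True

    def authorize(self, action, context):
        now = time.time()
        self.timestamps = [t for t in self.timestamps if now - t < 60]
        if self.halted:
            decision = "denied: kill switch engaged"
        elif action not in self.allowed:
            decision = "denied: action outside documented boundary"
        elif len(self.timestamps) >= self.max_per_minute:
            decision = "denied: rate limit exceeded"
        else:
            self.timestamps.append(now)
            decision = "allowed"
        # Audit trail: log every request with its context and outcome.
        self.audit_log.append({"action": action, "context": context,
                               "decision": decision, "ts": now})
        return decision == "allowed"
```

In practice each tool call would pass through `authorize` before execution, and the kill switch itself must be tested periodically, as the table requires.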
5. Standards for General Employees (User Rules)
5.1 The "No Secrets" Rule
Never input the following into public AI tools (e.g., ChatGPT, the public Claude interface, Copilot):
| Prohibited Input | Examples |
|---|---|
| Customer PII | Names, SSNs, Emails, Phone numbers |
| Intellectual Property | Unreleased code, patent drafts, strategic plans |
| Security Credentials | Passwords, API keys, certificates |
| Contractual/Legal Documents | Contracts, NDAs, legal correspondence |
| Financial Data | Non-public financial information, projections |
| Confidential Business Data | M&A plans, competitive intelligence |
5.2 Approved Tools
- Employees must only use AI tools listed in the Approved Software Directory
- Usage of "Shadow AI" (unauthorized AI tools) is a policy violation
- Personal accounts for AI tools must not be used for business purposes
- Report unauthorized AI tool usage to AI Governance
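Tooling can back up the "No Secrets" rule before a prompt ever leaves the network. The sketch below uses a few illustrative regex patterns; a production deployment would rely on the organization's DLP service rather than this hand-rolled list.

```python
import re

# Illustrative patterns only; real coverage comes from a DLP service.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(text):
    """Return the categories of prohibited data found in a draft prompt,
    so it can be blocked or redacted before submission to a public tool."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(text))

hits = scan_prompt(
    "Customer Jane Doe, SSN 123-45-6789, wrote from jane@example.com")
```

A scanner like this catches obvious leaks; it does not remove the employee's own responsibility under Section 5.3.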
5.3 Verification and Accountability
| Principle | Requirement |
|---|---|
| Human Responsibility | You are responsible for the output of any AI tool you use |
| Verification Required | Verify facts, calculations, and code generated by AI before use |
| No Blind Automation | AI outputs must not connect directly to execution without review |
| Hallucination Awareness | Understand that AI can generate plausible but false information |
5.4 Transparency and Disclosure
| Requirement | Standard |
|---|---|
| No Human Impersonation | AI chatbots must identify as AI at interaction start |
| Labeling | AI-generated content for publication should be labeled internally |
| Provenance | Maintain records of what was AI-generated vs. human-created |
| Watermarking | Apply watermarks to AI-generated media where feasible |
6. Standards for Procurement (Buying AI)
6.1 Vendor Due Diligence
| Requirement | Standard |
|---|---|
| AI Disclosure | Vendors must disclose AI/ML use and training data practices |
| Security Certification | SOC 2 Type II or ISO 27001 required for enterprise AI vendors |
| GPAI Compliance | For GPAI models, vendor must demonstrate EU AI Act compliance |
| Audit Rights | Contract must include right to audit AI systems |
6.2 Contractual Requirements
| Clause | Requirement |
|---|---|
| Opt-Out Rights | Vendor must not use our data to train their models |
| IP Indemnification | GenAI vendors should provide copyright indemnification |
| Data Processing Agreement | Required for any AI processing personal data |
| Incident Notification | Vendor must notify us of AI-related incidents within 24 hours |
| Model Transparency | Documentation on model capabilities and limitations required |
6.3 GPAI Provider Requirements (EU AI Act)
For vendors providing GPAI models:
- Technical documentation must be available
- Transparency report on capabilities and limitations
- Training data summary
- Copyright compliance policy
- For systemic risk models: safety evaluation, red teaming results
7. Training Requirements
7.1 Mandatory Training by Role
| Role | Training Required | Frequency |
|---|---|---|
| All Employees | AI Awareness & Policy | Annual |
| AI Developers | Secure AI Development, Bias Testing | Annual |
| Product Managers | AI Risk Assessment, Ethical AI | Annual |
| AI System Owners | Governance Lifecycle, Incident Response | Annual |
| Data Scientists | Model Risk Management, Fairness Testing | Annual |
| Executives | AI Governance Overview, Risk Appetite | Annual |
7.2 AI Literacy Program (EU AI Act Compliance)
All staff interacting with AI systems must:
- Understand basic AI capabilities and limitations
- Recognize potential AI risks
- Know when to escalate concerns
- Understand transparency requirements
8. Monitoring and Incident Reporting
8.1 Performance Monitoring
| Requirement | Standard |
|---|---|
| Drift Monitoring | High-Risk models must be monitored for performance drift |
| Threshold Actions | If performance falls below the defined threshold, the model must be taken offline or retrained |
| Bias Monitoring | Ongoing monitoring for disparate impact |
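The Threshold Actions row can be reduced to a small check run on each monitoring cycle. The 0.05 drop tolerance below is an assumed illustration; actual thresholds are set per model in its risk assessment.

```python
def drift_check(baseline_accuracy, recent_accuracy, max_drop=0.05):
    """Flag a model for offline review or retraining when recent
    performance drops more than max_drop below its validated baseline."""
    drop = baseline_accuracy - recent_accuracy
    status = "take_offline_or_retrain" if drop > max_drop else "ok"
    return {"status": status, "drop": round(drop, 3)}

# Hypothetical figures: an 8-point drop against a 5-point tolerance.
result = drift_check(baseline_accuracy=0.92, recent_accuracy=0.84)
```

The same pattern extends to fairness metrics for the Bias Monitoring requirement, comparing disparate-impact ratios against their validated baselines.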
8.2 Incident Reporting
Employees must immediately report:
| Incident Type | Reporting Channel |
|---|---|
| AI producing harmful, discriminatory, or illegal content | AI Governance + Security |
| Suspected data leakage into AI model | Security + Privacy |
| Unexpected autonomous behavior | AI Governance + Security |
| Prompt injection or jailbreak attempt | Security |
| AI hallucination causing material harm | AI Governance |
| Agentic AI taking unauthorized actions | AI Governance + Security (immediate halt) |
8.3 Serious Incident Notification (EU AI Act)
For High-Risk AI systems, serious incidents must be:
- Reported to AI Risk Officer within 24 hours
- Documented in incident management system
- Notified to competent authorities as required
- Root cause analysis completed within 30 days
9. Compliance and Enforcement
9.1 Audit Rights
Internal Audit reserves the right to audit any AI system, including:
- Source code and model architecture
- Training data and evaluation datasets
- Outputs and decision logs
- Documentation and evidence
9.2 Non-Compliance Consequences
| Violation Severity | Potential Consequences |
|---|---|
| Minor (first offense) | Coaching and additional training |
| Moderate | Formal warning, remediation plan |
| Serious | Disciplinary action up to termination |
| Critical | Termination, potential legal action |
9.3 Exception Process
Exceptions to this policy:
- Must be formally documented
- Require AI Governance Board approval
- Are time-bound (maximum 90 days)
- Require compensating controls
- Must be reviewed at expiration
10. Policy Governance
10.1 Policy Review
- Annual Review: Full policy review by AI Governance Board
- Regulatory Updates: Policy updated within 60 days of relevant regulatory changes
- Incident-Driven Updates: Policy reviewed after significant incidents
10.2 Questions and Support
| Support Type | Contact |
|---|---|
| Policy Questions | ai-governance@[company].com |
| Incident Reporting | security-incident@[company].com |
| Training | learning@[company].com |
| Tool Approval | procurement@[company].com |
11. Approvals
| Role | Name | Signature | Date |
|---|---|---|---|
| AI Governance Lead | | | |
| Chief Information Security Officer | | | |
| Chief Legal Officer | | | |
| Chief Privacy Officer | | | |
| Chief Human Resources Officer | | | |
Document History
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2025-06-15 | AI Governance Office | Initial release |
| 2.0 | 2026-01-15 | AI Governance Office | Added EU AI Act prohibited practices, GPAI requirements, agentic AI standards, training requirements |
Next Step: Proceed to Artifact 6: AI System Card (Model Card) Template
CODITECT AI Risk Management Framework
Document ID: AI-RMF-05 | Version: 2.0.0 | Status: Active
AZ1.AI Inc. | CODITECT Platform
Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001
This document is part of the CODITECT AI Risk Management Framework.
For questions or updates, contact the AI Governance Office.
Repository: coditect-ai-risk-management-framework
Last Updated: 2026-01-15
Owner: AZ1.AI Inc. | Lead: Hal Casteel