
Enterprise AI Policy & Standard

Acceptable Use and Development Requirements


Document Control

| Field | Details |
| --- | --- |
| Document Type | Enterprise Policy |
| Applies To | All Employees, Contractors, Developers, and Third-Party Vendors |
| Effective Date | 2026-01-15 |
| Version | v2.0 |
| Review Cadence | Annual minimum |
| Policy Owner | AI Governance Lead / AI Risk Officer |

1. Purpose

This policy establishes the minimum mandatory standards for the development, procurement, and usage of Artificial Intelligence (AI) systems within the organization. Its purpose is to mitigate legal, security, privacy, and reputational risks while enabling responsible innovation.


2. Scope

This policy applies to:

| Scope | Coverage |
| --- | --- |
| Built AI | Models trained, fine-tuned, or developed internally |
| Bought AI | Vendor software, SaaS, or platforms with embedded AI features |
| Employee Use | Usage of AI tools (public or enterprise) for business tasks |
| Agentic AI | Autonomous agents, multi-agent systems, tool-using AI |
| GPAI Models | General-purpose AI models placed on the market |

3. The "Red Lines" (Prohibited Uses)

The following uses of AI are strictly prohibited unless an explicit, written waiver is granted by the AI Governance Board and Legal Counsel. No business justification overrides these prohibitions.

3.1 EU AI Act Prohibited Practices (Article 5)

| Prohibited Use | Description |
| --- | --- |
| Social Scoring | AI systems that evaluate trustworthiness or social standing based on social behavior or predicted personality traits |
| Subliminal Manipulation | AI designed to distort behavior or manipulate decision-making, causing physical or psychological harm |
| Exploitation of Vulnerabilities | AI that exploits vulnerabilities related to age, disability, or socio-economic situation |
| Real-Time Remote Biometric Identification | Identification in publicly accessible spaces (except narrow law-enforcement exemptions) |
| Emotion Recognition | Use in the workplace or educational institutions for performance/behavior evaluation |
| Biometric Categorization | Inferring race, political opinions, religious beliefs, or sexual orientation from biometrics |
| Untargeted Facial Recognition Scraping | Creating facial recognition databases through untargeted scraping |
| Predictive Policing | Risk assessments based solely on profiling or personality traits |

3.2 Additional Enterprise Prohibitions

| Prohibited Use | Description |
| --- | --- |
| Automated Termination | AI systems that make final hiring or firing decisions without human review |
| Undisclosed Deepfakes | Generating synthetic media of real persons without consent and clear disclosure |
| Medical/Legal Advice | AI providing medical diagnosis or legal advice without professional oversight |
| Autonomous Weapons | AI systems designed to cause harm to humans |
| Mass Surveillance | Continuous monitoring of employee behavior without consent |

4. Standards for AI Builders (Developers & Data Scientists)

4.1 Data Management

| Requirement | Standard |
| --- | --- |
| Data Separation | Production data must never be used for training or fine-tuning unless anonymized/de-identified and approved by the Privacy Office |
| Data Lineage | All training datasets must be documented (source, collection method, rights to use) |
| Poisoning Prevention | Training data pipelines must be secured against unauthorized modification |
| Bias Detection | Training data must be analyzed for demographic representation issues |
| Copyright Compliance | Training data must be reviewed for copyright compliance; maintain documentation |

4.2 Model Security

| Requirement | Standard |
| --- | --- |
| No Hardcoded Secrets | API keys, credentials, and tokens must never be embedded in code or notebooks |
| Safe Serialization | Only safe serialization formats (e.g., Safetensors); loading pickle files from untrusted sources is prohibited |
| Adversarial Testing | High-risk models must undergo red teaming before deployment |
| Model Provenance | Document model origin, training process, and all modifications |
| Vulnerability Management | Monitor for CVEs affecting model libraries; patch within the defined SLA |
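The "No Hardcoded Secrets" requirement can be enforced at the code level by loading credentials from the environment and failing fast when they are absent. A minimal sketch, assuming a hypothetical `MODEL_REGISTRY_TOKEN` environment variable (the variable name is illustrative, not a platform standard):

```python
import os

def get_registry_token() -> str:
    """Fetch the model-registry token from the environment.

    Failing fast when the secret is missing avoids the temptation to
    fall back to a hardcoded default embedded in code or notebooks.
    """
    token = os.environ.get("MODEL_REGISTRY_TOKEN")  # hypothetical variable name
    if not token:
        raise RuntimeError("MODEL_REGISTRY_TOKEN is not set; refusing to continue")
    return token
```

In practice the environment variable would be injected by a secrets manager at deploy time; the point is that source control never sees the value.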

4.3 Development Lifecycle

| Requirement | Standard |
| --- | --- |
| Version Control | All models must be versioned in the Model Registry |
| Reproducibility | Training code and hyperparameters must be archived |
| Evaluation | No model may be promoted to production without passing defined metrics (accuracy and fairness) |
| Documentation | A System Card/Model Card is required for all production models |
| Testing | An automated testing pipeline is required for all models |

4.4 Agentic AI Development

| Requirement | Standard |
| --- | --- |
| Action Boundaries | All permitted actions must be explicitly defined and documented |
| Permission Scoping | Apply the least-privilege principle to tool access |
| Audit Trail | All agent actions must be logged with context |
| Kill Switch | A tested mechanism to immediately halt agent operation |
| Rate Limiting | Implement rate limits on agent actions |
| Sandboxing | Test agents in isolated environments before production |
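Several of these controls (action boundaries, rate limiting, kill switch, audit trail) can be combined in one guardrail wrapper around an agent's tool calls. A minimal sketch with hypothetical tool names; real implementations would log to a durable audit store rather than stdout:

```python
import time
from threading import Event

# Assumptions for illustration: an explicit action allowlist, a per-minute
# rate limit, and an operator-controlled kill switch.
ALLOWED_ACTIONS = {"search_docs", "draft_email"}  # hypothetical tool names
KILL_SWITCH = Event()
MAX_ACTIONS_PER_MINUTE = 30

class AgentGuard:
    """Authorizes each agent action against the policy controls above."""

    def __init__(self) -> None:
        self._timestamps: list[float] = []

    def authorize(self, action: str) -> bool:
        if KILL_SWITCH.is_set():
            return False  # operator has halted the agent
        if action not in ALLOWED_ACTIONS:
            return False  # outside the documented action boundaries
        now = time.monotonic()
        # Drop timestamps older than the 60-second rate-limit window.
        self._timestamps = [t for t in self._timestamps if now - t < 60]
        if len(self._timestamps) >= MAX_ACTIONS_PER_MINUTE:
            return False  # rate limit exceeded
        self._timestamps.append(now)
        print(f"AUDIT action={action} t={now:.3f}")  # audit-trail stub
        return True
```

Calling `KILL_SWITCH.set()` from any supervising thread immediately blocks all further actions, which is the behavior the Kill Switch requirement asks to be tested before production.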

5. Standards for General Employees (User Rules)

5.1 Usage of Generative AI (Public Tools)

The "No Secrets" Rule: Never input the following into public AI tools (e.g., the public versions of ChatGPT, Claude, or Copilot):

| Prohibited Input | Examples |
| --- | --- |
| Customer PII | Names, SSNs, emails, phone numbers |
| Intellectual Property | Unreleased code, patent drafts, strategic plans |
| Security Credentials | Passwords, API keys, certificates |
| Contractual/Legal Documents | Contracts, NDAs, legal correspondence |
| Financial Data | Non-public financial information, projections |
| Confidential Business Data | M&A plans, competitive intelligence |
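Some of the patterned items above (emails, SSNs) can be caught by a pre-submission filter before text leaves the enterprise boundary. A minimal, deliberately non-exhaustive sketch; real DLP tooling covers far more categories than these two illustrative regexes:

```python
import re

# Illustrative patterns only: email addresses and US SSN-shaped strings.
# A production filter would also cover credentials, account numbers, etc.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Mask recognizable PII patterns before text is sent to an external tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

Such a filter reduces accidental leakage but does not replace the rule itself: employees remain responsible for what they submit.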

5.2 Approved Tools Only

  • Employees must only use AI tools listed in the Approved Software Directory
  • Usage of "Shadow AI" (unauthorized AI tools) is a policy violation
  • Personal accounts for AI tools must not be used for business purposes
  • Report unauthorized AI tool usage to AI Governance

5.3 Verification and Accountability

| Principle | Requirement |
| --- | --- |
| Human Responsibility | You are responsible for the output of any AI tool you use |
| Verification Required | Verify facts, calculations, and code generated by AI before use |
| No Blind Automation | AI outputs must not feed directly into execution without review |
| Hallucination Awareness | Understand that AI can generate plausible but false information |
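The "No Blind Automation" principle amounts to a gate between AI output and any system that acts on it. A minimal sketch; the function name and reviewer field are hypothetical, and a real gate would live in the workflow engine rather than application code:

```python
from typing import Optional

def execute_ai_output(command: str, approved_by: Optional[str]) -> str:
    """Block AI-generated actions that lack a named human reviewer.

    Recording the reviewer's identity also satisfies the Human
    Responsibility principle: accountability attaches to a person.
    """
    if not approved_by:
        raise PermissionError("AI output requires human review before execution")
    # Real execution would happen here; this sketch just acknowledges it.
    return f"executed '{command}' with approval from {approved_by}"
```

The essential design choice is that approval is a required argument, not an optional flag, so no call path can skip the review step silently.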

5.4 Transparency and Disclosure

| Requirement | Standard |
| --- | --- |
| Don't Impersonate Humans | AI chatbots must identify themselves as AI at the start of an interaction |
| Labeling | AI-generated content intended for publication should be labeled internally |
| Provenance | Maintain records of what was AI-generated vs. human-created |
| Watermarking | Apply watermarks to AI-generated media where feasible |

6. Standards for Procurement (Buying AI)

6.1 Vendor Due Diligence

| Requirement | Standard |
| --- | --- |
| AI Disclosure | Vendors must disclose AI/ML use and training-data practices |
| Security Certification | SOC 2 Type II or ISO 27001 is required for enterprise AI vendors |
| GPAI Compliance | For GPAI models, the vendor must demonstrate EU AI Act compliance |
| Audit Rights | The contract must include the right to audit AI systems |

6.2 Contractual Requirements

| Clause | Requirement |
| --- | --- |
| Opt-Out Rights | Vendor must not use our data to train their models |
| IP Indemnification | GenAI vendors should provide copyright indemnification |
| Data Processing Agreement | Required for any AI processing personal data |
| Incident Notification | Vendor must notify us of AI-related incidents within 24 hours |
| Model Transparency | Documentation of model capabilities and limitations is required |

6.3 GPAI Provider Requirements (EU AI Act)

For vendors providing GPAI models:

  • Technical documentation must be available
  • Transparency report on capabilities and limitations
  • Training data summary
  • Copyright compliance policy
  • For systemic risk models: safety evaluation, red teaming results

7. Training Requirements

7.1 Mandatory Training by Role

| Role | Training Required | Frequency |
| --- | --- | --- |
| All Employees | AI Awareness & Policy | Annual |
| AI Developers | Secure AI Development, Bias Testing | Annual |
| Product Managers | AI Risk Assessment, Ethical AI | Annual |
| AI System Owners | Governance Lifecycle, Incident Response | Annual |
| Data Scientists | Model Risk Management, Fairness Testing | Annual |
| Executives | AI Governance Overview, Risk Appetite | Annual |

7.2 AI Literacy Program (EU AI Act Compliance)

All staff interacting with AI systems must:

  • Understand basic AI capabilities and limitations
  • Recognize potential AI risks
  • Know when to escalate concerns
  • Understand transparency requirements

8. Monitoring and Incident Reporting

8.1 Performance Monitoring

| Requirement | Standard |
| --- | --- |
| Drift Monitoring | High-risk models must be monitored for performance drift |
| Threshold Actions | If performance drops below the defined threshold, the model must be taken offline or retrained |
| Bias Monitoring | Ongoing monitoring for disparate impact |
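The threshold-action requirement can be expressed as a simple comparison between baseline and live metrics. A minimal sketch; the 0.05 drop threshold is illustrative, and the real value for each model comes from its System Card:

```python
def check_drift(baseline_accuracy: float, current_accuracy: float,
                max_drop: float = 0.05) -> str:
    """Return the required action when accuracy drifts from its baseline.

    `max_drop` is an illustrative tolerance; production thresholds are
    set per model in its System Card and monitored continuously.
    """
    drop = baseline_accuracy - current_accuracy
    if drop > max_drop:
        return "take_offline_or_retrain"
    return "ok"
```

The same pattern applies to fairness metrics: compare a live disparity measure against a documented tolerance and escalate when it is exceeded.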

8.2 Incident Reporting

Employees must immediately report:

| Incident Type | Reporting Channel |
| --- | --- |
| AI producing harmful, discriminatory, or illegal content | AI Governance + Security |
| Suspected data leakage into an AI model | Security + Privacy |
| Unexpected autonomous behavior | AI Governance + Security |
| Prompt injection or jailbreak attempt | Security |
| AI hallucination causing material harm | AI Governance |
| Agentic AI taking unauthorized actions | AI Governance + Security (immediate halt) |

8.3 Serious Incident Notification (EU AI Act)

For High-Risk AI systems, serious incidents must be:

  • Reported to AI Risk Officer within 24 hours
  • Documented in incident management system
  • Notified to competent authorities as required
  • Root cause analysis completed within 30 days

9. Compliance and Enforcement

9.1 Audit Rights

Internal Audit reserves the right to audit any AI system, including:

  • Source code and model architecture
  • Training data and evaluation datasets
  • Outputs and decision logs
  • Documentation and evidence

9.2 Non-Compliance Consequences

| Violation Severity | Potential Consequences |
| --- | --- |
| Minor (first offense) | Coaching and additional training |
| Moderate | Formal warning, remediation plan |
| Serious | Disciplinary action up to termination |
| Critical | Termination, potential legal action |

9.3 Exception Process

Exceptions to this policy:

  • Must be formally documented
  • Require AI Governance Board approval
  • Are time-bound (maximum 90 days)
  • Require compensating controls
  • Must be reviewed at expiration

10. Policy Governance

10.1 Policy Review

  • Annual Review: Full policy review by AI Governance Board
  • Regulatory Updates: Policy updated within 60 days of relevant regulatory changes
  • Incident-Driven Updates: Policy reviewed after significant incidents

10.2 Questions and Support

| Support Type | Contact |
| --- | --- |
| Policy Questions | ai-governance@[company].com |
| Incident Reporting | security-incident@[company].com |
| Training | learning@[company].com |
| Tool Approval | procurement@[company].com |

11. Approvals

| Role | Name | Signature | Date |
| --- | --- | --- | --- |
| AI Governance Lead | | | |
| Chief Information Security Officer | | | |
| Chief Legal Officer | | | |
| Chief Privacy Officer | | | |
| Chief Human Resources Officer | | | |

Document History

| Version | Date | Author | Changes |
| --- | --- | --- | --- |
| 1.0 | 2025-06-15 | AI Governance Office | Initial release |
| 2.0 | 2026-01-15 | AI Governance Office | Added EU AI Act prohibited practices, GPAI requirements, agentic AI standards, training requirements |

Next Step: Proceed to Artifact 6: AI System Card (Model Card) Template


CODITECT AI Risk Management Framework

Document ID: AI-RMF-05 | Version: 2.0.0 | Status: Active


AZ1.AI Inc. | CODITECT Platform

Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001


This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework Last Updated: 2026-01-15 Owner: AZ1.AI Inc. | Lead: Hal Casteel