
AI Intake & Registration Form

"The Front Door" for AI Initiatives


Document Control

Field         | Details
------------- | -------
Document Type | Form Template / Schema
Purpose       | Capture essential metadata about AI initiatives for classification, tracking, and governance
Completed By  | Project Lead, Product Owner, or Technical Lead
Version       | v2.0

Instructions

Complete this form for all new AI initiatives, including:

  • New AI/ML models or systems
  • Vendor AI tool procurement or enablement
  • Major changes to existing AI systems
  • New use cases for existing models
  • Agentic AI deployments

Submission Deadline: Before any development work begins or a vendor contract is signed

Expected Processing Time:

Anticipated Tier | Processing Time
---------------- | ---------------
Low              | 1-3 business days
Medium           | 3-5 business days
High             | 5-10 business days
Critical         | 10-20 business days

Section 1: Basic Information

1.1 Project Identification

Field                      | Response
-------------------------- | --------
Project Name               | [Short, descriptive name]
Inventory ID               | [Auto-assigned after submission]
Submission Date            | [YYYY-MM-DD]
Business Unit / Department | [Select: Marketing, HR, Finance, Engineering, Legal, Operations, Customer Service, Product, Other]

1.2 Problem Statement

What specific business problem are you solving? (1-2 sentences)

[Your response here]

1.3 AI Solution Description

How does the AI solve this problem? Briefly describe the functionality.

[Your response here]

1.4 Project Stage

Select current stage:

  • Idea / Concept
  • Proof of Concept / Prototype
  • Development
  • Pre-Production / Testing
  • Live in Production (Retroactive Registration)

1.5 Key Roles

Role                                  | Name | Email | Department
------------------------------------- | ---- | ----- | ----------
Business Owner (Accountable Exec)     |      |       |
Technical Lead (Responsible Engineer) |      |       |
Project Manager (Day-to-day contact)  |      |       |
Data Owner                            |      |       |

Section 2: Type of AI

2.1 Source of Model

Select one:

  • Internal Build: Training/building a model from scratch
  • Commercial / Vendor: Purchasing a tool with embedded AI (SaaS)
  • Open Source: Using an open-source model (e.g., Llama, Mistral) hosted internally
  • Hybrid: Fine-tuning a foundational model (e.g., OpenAI API, Anthropic) with our data
  • Agentic: Deploying autonomous AI agents

2.2 Model Category

Select all that apply:

  • Generative AI (Text): LLMs, Chatbots, Summarization
  • Generative AI (Code): Code generation, completion, review
  • Generative AI (Media): Image, Video, Audio generation
  • Predictive/Classification: Forecasting, Scoring, Fraud detection, Sentiment analysis
  • Computer Vision: OCR, Object detection, Image analysis
  • Recommender: Personalization, "Next best action"
  • Agentic AI: Autonomous agents, tool-using AI
  • Multi-Agent System: Multiple coordinated AI agents
  • Other: [Specify]

2.3 Vendor Details (If Commercial/Hybrid)

Field                        | Response
---------------------------- | --------
Vendor Name                  |
Product/Service Name         |
Contract Status              | [Not started / In negotiation / Signed]
Data Processing Location     | [US / EU / Other: specify]
IP Indemnification Provided? | [Yes / No / Unsure]
Zero Data Retention Clause?  | [Yes / No / Unsure]
SOC 2 / ISO 27001 Certified? | [Yes / No / Unsure]

Section 3: Risk Classification Inputs

3.1 Data Sensitivity

Select the HIGHEST applicable category:

  • Public: Open web data, no internal secrets
  • Internal: Non-sensitive corporate data (wikis, policies)
  • Confidential: Customer/Employee PII (names, emails), aggregated business metrics
  • Restricted/Secret: SPI (Health, Financial, Biometrics), MNPI, Passwords, Keys

List specific data types used:

[Your response here]

3.2 Impact of Failure

What happens if the AI is wrong?

  • Annoyance: Users ignore it; no harm done
  • Operational: Manual rework required; minor efficiency loss
  • Financial/Legal: Loss of money, regulatory fine, or discrimination against a user
  • Critical: Physical safety risk, major infrastructure outage, or severe reputational crisis

Describe potential failure scenarios:

[Your response here]

3.3 Level of Autonomy

  • Human-in-the-Loop: AI provides draft/suggestion; Human must approve before action
  • Human-on-the-Loop: AI acts automatically; Human monitors logs/dashboards to intervene if needed
  • Human-out-of-the-Loop: Fully autonomous execution without real-time oversight

3.4 User Impact Scope

  • Internal Staff (Non-critical functions)
  • Internal Staff (HR/Performance related)
  • External Customers (Support/Information)
  • External Customers (Financial/Medical/Legal decisions)
  • General Public
  • Vulnerable Populations (Children, Elderly, Patients)

3.5 Scale of Deployment

Field                                   | Response
--------------------------------------- | --------
Estimated number of users               |
Estimated decisions/predictions per day |
Geographic scope                        | [Single location / National / International / EU market]
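
Sections 3.1-3.4 are the inputs that drive risk tiering. One illustrative way to combine them is an additive score with escalation for any maximal factor. This is a sketch only, not the official rubric; the AI Governance team owns the actual tiering rules:

```python
# Illustrative (unofficial) preliminary-tier heuristic from Section 3 inputs.
SENSITIVITY = {"Public": 0, "Internal": 1, "Confidential": 2, "Restricted/Secret": 3}
IMPACT      = {"Annoyance": 0, "Operational": 1, "Financial/Legal": 2, "Critical": 3}
AUTONOMY    = {"Human-in-the-Loop": 0, "Human-on-the-Loop": 1, "Human-out-of-the-Loop": 2}

def preliminary_tier(sensitivity: str, impact: str, autonomy: str) -> str:
    score = SENSITIVITY[sensitivity] + IMPACT[impact] + AUTONOMY[autonomy]
    # A single maximal factor escalates regardless of the total score.
    if impact == "Critical" or sensitivity == "Restricted/Secret":
        return "Critical" if score >= 6 else "High"
    if score >= 5:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

print(preliminary_tier("Confidential", "Operational", "Human-in-the-Loop"))  # Medium
```

The escalation rule reflects the form's instruction to select the highest applicable sensitivity category: restricted data or critical failure impact should never land in a low tier, whatever the other inputs.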

Section 4: Technical & Security Profile

4.1 Hosting Location

  • On-Premise / Private Cloud (Internal VPC)
  • Vendor Cloud (SaaS)
  • Public API (e.g., OpenAI, Anthropic public endpoints)
  • Hybrid (combination)

4.2 Data Handling

Question                                           | Response
-------------------------------------------------- | --------
Will user inputs be used to train the model?       | [Yes / No]
Does the vendor retain our data for their own use? | [Yes / No / Unsure]
Is there an opt-out mechanism for data usage?      | [Yes / No / N/A]
Data retention period                              | [Specify duration]

4.3 Integration Points

Question                                                 | Response
-------------------------------------------------------- | --------
Does this system write to core databases?                | [Yes / No]
Does it have access to email or messaging (Slack/Teams)? | [Yes / No]
Does it integrate with other AI systems?                 | [Yes / No]
Does it have internet access?                            | [Yes / No]
Does it have access to internal APIs?                    | [Yes / No]

4.4 Agentic AI Specifics (If Applicable)

Question                                          | Response
------------------------------------------------- | --------
Can the agent take actions without human approval? | [Yes / No]
What tools does the agent have access to?          | [List tools]
Are there action boundaries defined?               | [Yes / No]
Is there a kill switch mechanism?                  | [Yes / No]
Is this part of a multi-agent system?              | [Yes / No]
Can the agent modify its own behavior?             | [Yes / No]
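
The questions in 4.4 probe for two concrete controls: an explicit action boundary (an allow-list of tools) and a kill switch. A minimal sketch of both, with hypothetical names, to show the shape of the controls this section asks about:

```python
# Sketch of the two guardrails Section 4.4 probes for. Class and
# method names are illustrative, not a prescribed implementation.
class AgentGuard:
    def __init__(self, allowed_actions: set[str]):
        # Action boundary: the agent may only invoke listed actions.
        self.allowed_actions = allowed_actions
        self.killed = False

    def kill(self) -> None:
        """Kill switch: blocks all further actions immediately."""
        self.killed = True

    def authorize(self, action: str) -> bool:
        return not self.killed and action in self.allowed_actions

guard = AgentGuard({"read_ticket", "draft_reply"})
print(guard.authorize("draft_reply"))  # inside the boundary
print(guard.authorize("send_email"))   # outside the boundary: denied
guard.kill()
print(guard.authorize("read_ticket"))  # kill switch engaged: denied
```

In practice the authorization check would sit between the agent's planner and its tool executor, so that every action passes through it before any side effect occurs.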

Section 5: Regulatory Applicability

5.1 EU AI Act Assessment

Does this system fall into any EU AI Act high-risk category?

  • Biometrics (remote identification, categorization)
  • Critical Infrastructure (water, gas, electricity, transport)
  • Education (access decisions, learning assessment, proctoring)
  • Employment (recruitment, screening, promotion, termination)
  • Essential Services (credit scoring, insurance pricing)
  • Law Enforcement (risk assessment, profiling)
  • Migration (visa, asylum, border control)
  • Justice (legal research, court assistance)
  • None of the above

5.2 GPAI Model Assessment

  • This is/uses a General-Purpose AI model
  • Training compute exceeds 10²³ FLOPs
  • Training compute exceeds 10²⁵ FLOPs (systemic-risk threshold)
  • Not applicable
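
The two compute markers in 5.2 are mechanical to check once cumulative training compute (in floating-point operations) is estimated. A small sketch with illustrative constant names; note that the EU AI Act's GPAI criteria go beyond raw compute, so this only flags the checklist items above:

```python
# Illustrative check of the Section 5.2 compute markers.
# Constant names are hypothetical, not from the regulation text.
GPAI_MARKER_FLOPS = 1e23      # first checklist threshold
SYSTEMIC_RISK_FLOPS = 1e25    # systemic-risk threshold

def gpai_flags(training_flops: float) -> dict:
    """Return which Section 5.2 compute boxes an estimate would tick."""
    return {
        "exceeds_1e23": training_flops > GPAI_MARKER_FLOPS,
        "systemic_risk": training_flops > SYSTEMIC_RISK_FLOPS,
    }

print(gpai_flags(3e24))  # above 10^23, below the systemic-risk line
```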

5.3 Other Regulatory Requirements

Select all that apply:

  • GDPR
  • HIPAA
  • PCI-DSS
  • SOX
  • CCPA/CPRA
  • Financial Services Regulations (specify)
  • Industry-specific regulations (specify)
  • Other: [Specify]

Section 6: Third-Party AI Components (AI-BOM)

List all third-party AI components used:

Component | Provider | Version | License Type | Purpose
--------- | -------- | ------- | ------------ | -------

6.1 Foundation Model Details (If Applicable)

Field                     | Response
------------------------- | --------
Model Name/Version        |
Provider                  |
Training Data Cutoff Date |
Known Limitations         |

Section 7: Acknowledgement

7.1 Submitter Acknowledgement

By submitting this form, I confirm that:

  • I am the owner (or delegate) of this initiative
  • The information provided is accurate to the best of my knowledge
  • I understand this system cannot deploy to Production until Risk Tiering is confirmed and required controls are met
  • I will notify AI Governance of any material changes to this system
  • I accept responsibility for ensuring this system complies with Enterprise AI Policy

Field           | Response
--------------- | --------
Submitter Name  |
Submitter Title |
Submission Date |

Section 8: For Governance Use Only

This section completed by AI Governance team

Field                    | Response
------------------------ | --------
Date Received            |
Initial Reviewer         |
Preliminary Risk Tier    | [Low / Medium / High / Critical]
EU AI Act Classification | [Prohibited / High-Risk / GPAI / Limited Risk / Minimal Risk]
Required Documentation   |
Next Steps               |
Target Review Date       |

8.1 Classification Rationale

[Governance team notes]

8.2 Required Actions

Action | Owner | Due Date | Status
------ | ----- | -------- | ------

Document History

Version | Date       | Author               | Changes
------- | ---------- | -------------------- | -------
1.0     | 2025-06-15 | AI Governance Office | Initial release
2.0     | 2026-01-15 | AI Governance Office | Added EU AI Act assessment, GPAI fields, agentic AI section, AI-BOM

Next Step: Proceed to Artifact 5: Enterprise AI Policy & Standard


CODITECT AI Risk Management Framework

Document ID: AI-RMF-04 | Version: 2.0.0 | Status: Active


AZ1.AI Inc. | CODITECT Platform

Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001


This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework
Last Updated: 2026-01-15
Owner: AZ1.AI Inc. | Lead: Hal Casteel