AI Governance Quick-Start Guide for SMBs
Document Type: Implementation Guide
Target Audience: Small and Medium Businesses (1-500 employees)
Framework Alignment: NIST AI RMF 2.0, EU AI Act, ISO/IEC 42001
Version: 1.0
1. Why This Matters for SMBs
1.1 The SMB AI Reality
| Challenge | SMB Reality | This Guide's Solution |
|---|---|---|
| Limited resources | No dedicated AI governance team | Lightweight, proportionate controls |
| Budget constraints | Cannot afford enterprise tools | Free/low-cost tool alternatives |
| Regulatory pressure | Same EU AI Act applies | Simplified compliance path |
| Competitive pressure | Must adopt AI to compete | Enable safe AI adoption |
1.2 Minimum Viable Governance
This guide provides the minimum governance needed to:
- ✓ Comply with EU AI Act requirements
- ✓ Reduce AI-related risks
- ✓ Enable customer/partner confidence
- ✓ Avoid costly incidents
2. 5-Step Quick-Start
Step 1: Know Your AI (Week 1)
Goal: Create a simple inventory of all AI in use.
Action: Complete this spreadsheet for every AI system:
| AI System | Vendor/Type | Purpose | Data Used | Owner |
|---|---|---|---|---|
| ChatGPT | OpenAI API | Customer support drafts | Customer queries | [Name] |
| Copilot | Microsoft | Code assistance | Source code | [Name] |
| [Your AI] | | | | |
Time Required: 2-4 hours
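If you would rather keep the inventory in a plain file under version control than in a spreadsheet, a short script can load and sanity-check it. This is a sketch, not part of any tool: the field names simply mirror the columns in the table above, and the only rule enforced is the one this guide cares most about, that every AI system has a named owner.

```python
import csv
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str       # e.g. "ChatGPT"
    vendor: str     # e.g. "OpenAI API"
    purpose: str    # e.g. "Customer support drafts"
    data_used: str  # e.g. "Customer queries"
    owner: str      # the accountable person

def load_inventory(path: str) -> list[AISystem]:
    """Read the inventory CSV; reject any row without a named owner."""
    with open(path, newline="") as f:
        rows = [AISystem(**row) for row in csv.DictReader(f)]
    missing = [r.name for r in rows if not r.owner.strip()]
    if missing:
        raise ValueError(f"Systems without an owner: {missing}")
    return rows
```

The CSV header row must match the field names (`name,vendor,purpose,data_used,owner`).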
Step 2: Classify Your Risk (Week 1)
Goal: Determine risk level for each AI system.
Simple Risk Classification:
┌─────────────────────────────────────────────────────────────┐
│ QUESTION 1: Does the AI make or influence decisions about: │
│ • Hiring/firing people │
│ • Credit/loan approvals │
│ • Medical diagnosis │
│ • Legal matters │
│ • Law enforcement │
│ • Education access │
│ • Critical infrastructure │
│ │
│ YES → HIGH RISK (needs more governance) │
│ NO → Continue to Question 2 │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ QUESTION 2: Does the AI process sensitive data? │
│ • Health information │
│ • Financial data │
│ • Biometric data │
│ • Children's data │
│ │
│ YES → MEDIUM RISK │
│ NO → LOW RISK │
└─────────────────────────────────────────────────────────────┘
Time Required: 1-2 hours
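The two-question decision tree above is simple enough to capture in a few lines of code, which is handy if you want to classify a long inventory consistently. A minimal sketch; the category sets are illustrative shorthand for the bullets in the flowchart, not exhaustive legal definitions.

```python
# Decision areas that trigger HIGH risk (Question 1) and sensitive
# data types that trigger MEDIUM risk (Question 2). Illustrative only.
HIGH_RISK_DECISIONS = {
    "hiring", "credit", "medical", "legal",
    "law_enforcement", "education_access", "critical_infrastructure",
}
SENSITIVE_DATA = {"health", "financial", "biometric", "children"}

def classify(decision_areas: set[str], data_types: set[str]) -> str:
    """Apply Question 1, then Question 2, from the flowchart above."""
    if decision_areas & HIGH_RISK_DECISIONS:
        return "HIGH"
    if data_types & SENSITIVE_DATA:
        return "MEDIUM"
    return "LOW"
```

For example, `classify({"hiring"}, set())` returns `"HIGH"`, while a drafting assistant touching no sensitive data comes back `"LOW"`.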
Step 3: Set Basic Rules (Week 2)
Goal: Establish simple usage policies.
One-Page AI Policy (adapt for your company):
# [COMPANY NAME] AI Usage Policy
## What Employees CAN Do:
- Use approved AI tools (see list below)
- Use AI to draft documents, emails, code
- Use AI for research and analysis
## What Employees CANNOT Do:
- Enter customer personal information into public AI tools
- Enter passwords, API keys, or secrets
- Use AI for final hiring/firing decisions without human review
- Use AI outputs without verification
- Use non-approved AI tools for work
## Approved AI Tools:
1. [Tool 1] - For [Purpose]
2. [Tool 2] - For [Purpose]
3. [Add your approved tools]
## Questions?
Contact: [Name/Email]
Effective: [Date]
Time Required: 2-4 hours
Step 4: Add Basic Safeguards (Week 2-3)
Goal: Implement minimum safety controls.
Safeguard Checklist:
| Safeguard | How to Implement | Cost | Done? |
|---|---|---|---|
| AI tool approval | IT must approve before use | Free | [ ] |
| Human review | Require review before publishing AI outputs | Free | [ ] |
| Data rules | Block PII in prompts (training + awareness) | Free | [ ] |
| Vendor vetting | Check vendor has security certifications | Free | [ ] |
| Usage logging | Enable logging in AI tools | Varies | [ ] |
Time Required: 4-8 hours
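The "data rules" safeguard is mostly training and awareness, but a lightweight screen run over prompts before they leave your network can back it up. The sketch below uses regular expressions; the patterns are illustrative and will miss plenty of real PII, so treat this as a tripwire, not a guarantee.

```python
import re

# Illustrative patterns only -- real PII detection needs a dedicated tool.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any patterns found; an empty list means OK to send."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]
```

Anything flagged should be stripped or rephrased before the prompt is sent to a public AI tool.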
Step 5: Document and Review (Week 3-4)
Goal: Create minimal documentation for compliance.
SMB Documentation Kit:
| Document | Template | Time to Complete |
|---|---|---|
| AI Inventory | Spreadsheet (Step 1) | 2-4 hours |
| AI Policy | One-pager (Step 3) | 2-4 hours |
| Risk Assessment | Simple checklist per AI | 1 hour each |
| Vendor Review | Basic questionnaire | 30 min each |
Ongoing: Review quarterly (put it in your calendar!)
Total Time to Basic Compliance: ~20-30 hours
3. EU AI Act Essentials for SMBs
3.1 What You Must Know
| If you... | Your obligations | Deadline |
|---|---|---|
| Use AI for hiring | High-risk requirements | Aug 2026 |
| Use AI for credit decisions | High-risk requirements | Aug 2026 |
| Use ChatGPT/Claude for work | Transparency + data rules | Now |
| Build AI products | Varies by risk level | Varies |
| Operate only in non-EU markets | Check local laws | - |
3.2 Prohibited AI Uses
NEVER use AI for:
- Scoring people based on social behavior
- Exploiting vulnerabilities of specific groups
- Real-time remote biometric identification in public spaces (without specific legal authorization)
- Inferring emotions in the workplace or educational settings (except for medical or safety reasons)
- Creating databases through mass facial recognition scraping
3.3 SMB EU AI Act Compliance Path
If you ONLY USE AI (don't build it):
| Step | Action | Priority |
|---|---|---|
| 1 | Inventory your AI tools | High |
| 2 | Check for prohibited uses | High |
| 3 | Classify risk levels | Medium |
| 4 | Train employees on AI literacy | Medium |
| 5 | Document your AI use | Medium |
| 6 | Monitor vendor compliance | Low |
4. Templates and Tools
4.1 AI Inventory Spreadsheet
Download or copy this structure:
| Column | Description | Example |
|---|---|---|
| AI System Name | What you call it | "Customer Chatbot" |
| Vendor | Who provides it | "OpenAI" |
| Product | Specific product | "ChatGPT API" |
| Purpose | What it does | "Answer customer questions" |
| Risk Level | Low/Medium/High | "Medium" |
| Data Types | What data it sees | "Customer queries, no PII" |
| Owner | Who's responsible | "Jane Smith" |
| Contract End | When contract expires | "2025-12-31" |
| Last Review | When you last checked it | "2025-01-15" |
4.2 Simple Vendor Assessment
Before using a new AI vendor, check:
| Question | Good Answer | Your Vendor |
|---|---|---|
| Do they have SOC 2 certification? | Yes | [ ] Yes [ ] No |
| Do they have a privacy policy? | Yes | [ ] Yes [ ] No |
| Do they train on your data? | No (or opt-out available) | [ ] Yes [ ] No |
| Do they have terms of service? | Yes | [ ] Yes [ ] No |
| Is there a data processing agreement? | Yes | [ ] Yes [ ] No |
| Can you delete your data? | Yes | [ ] Yes [ ] No |
Score: 5-6 Yes = Good | 3-4 Yes = Acceptable with caution | <3 Yes = Consider alternatives
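The scoring rule above maps directly to code if you want to track vendor reviews programmatically. A sketch; the input is simply the count of "good" answers across the six questions (remembering that for "Do they train on your data?" the good answer is No, or an available opt-out).

```python
def vendor_verdict(good_answers: int) -> str:
    """Map the count of 'good' answers (0-6) to the guide's verdict."""
    if not 0 <= good_answers <= 6:
        raise ValueError("expected a count between 0 and 6")
    if good_answers >= 5:
        return "Good"
    if good_answers >= 3:
        return "Acceptable with caution"
    return "Consider alternatives"
```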
4.3 AI Decision Documentation (For High-Risk)
When AI influences important decisions, document:
# AI-Assisted Decision Record
Date: [Date]
Decision: [What was decided]
AI System Used: [Which AI tool]
AI Output: [What the AI said/recommended]
Human Review: [Who reviewed it]
Final Decision: [What was actually decided]
Justification: [Why this decision was made]
Signed: _______________
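A small helper can stamp out the record above so no field is ever forgotten. This is a sketch under the assumption that records are kept as plain text; the field names follow the template exactly.

```python
from datetime import date

FIELDS = ("Decision", "AI System Used", "AI Output",
          "Human Review", "Final Decision", "Justification")

def decision_record(values: dict[str, str]) -> str:
    """Render the decision record as text; raise if any field is missing."""
    missing = [f for f in FIELDS if not values.get(f, "").strip()]
    if missing:
        raise ValueError(f"Missing fields: {missing}")
    lines = [f"Date: {date.today().isoformat()}"]
    lines += [f"{f}: {values[f]}" for f in FIELDS]
    lines.append("Signed: _______________")
    return "\n".join(lines)
```

Failing loudly on a missing field is the point: an incomplete record is worse than a delayed one when an auditor asks for it.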
5. Free and Low-Cost Tools
5.1 Documentation
- Google Sheets/Excel: AI inventory, risk tracking
- Google Docs/Notion: Policies, procedures
- GitHub/GitLab: Version control for AI code
5.2 Monitoring
- Cloud provider built-in: AWS CloudWatch, Azure Monitor, GCP Logging
- Open source: Prometheus + Grafana (free, self-hosted)
5.3 Compliance
- NIST AI RMF Playbook: Free guidance from NIST
- EU AI Act templates: Free from industry associations
- This framework: Adapt enterprise templates for SMB scale
6. When to Get Help
6.1 DIY vs. Get Help
| Situation | DIY Feasible? | Consider Getting Help |
|---|---|---|
| Using ChatGPT for internal tasks | ✓ Yes | If handling sensitive data |
| Building AI into your product | Maybe | Likely yes |
| AI makes decisions about people | No | Yes - legal review needed |
| Selling into regulated industries | No | Yes - compliance review |
| Large-scale AI deployment | No | Yes - architecture review |
6.2 Types of Help Available
| Help Type | Cost Range | When to Use |
|---|---|---|
| Legal review | $2,000-10,000 | High-risk AI, EU market |
| Compliance consultant | $5,000-25,000 | Building compliance program |
| ISO 42001 certification | $10,000-50,000 | When enterprise customers require it |
| Security assessment | $5,000-20,000 | Handling sensitive data |
7. Common SMB Mistakes to Avoid
7.1 Top 10 Mistakes
| Mistake | Why It's Bad | How to Avoid |
|---|---|---|
| No AI inventory | Can't manage what you don't know | Do Step 1 first |
| Ignoring EU AI Act | Fines up to 7% of revenue | Do the basics |
| Putting secrets in AI | Data breach risk | Train employees |
| No human review | AI makes mistakes | Require review |
| Using unapproved tools | Shadow AI risk | Create approved list |
| No vendor vetting | Third-party risk | Basic checks |
| Assuming AI is always right | Hallucinations happen | Verify outputs |
| No documentation | Audit failure | Keep simple records |
| Over-engineering | Wastes limited resources | Start simple |
| Doing nothing | Falling behind competitors | Start today |
8. Growth Path
8.1 As You Grow, Add More
| Stage | Team Size | AI Governance Level |
|---|---|---|
| Startup | 1-10 | This quick-start guide |
| Growing SMB | 10-50 | Add formal policies |
| Scaling SMB | 50-200 | Add dedicated owner |
| Pre-Enterprise | 200-500 | Consider full framework |
| Enterprise | 500+ | Full enterprise framework |
8.2 Maturity Progression
| Level 1: Ad-Hoc (where many SMBs start) | Level 2: Basic (this guide gets you here) | Level 3: Managed (next step) |
|---|---|---|
| No inventory | AI inventory exists | Inventory automated |
| No policies | Basic policy | Full policy set |
| No oversight | Someone responsible | Governance process |
| No documentation | Basic documentation | Complete records |
9. Quick Reference Card
9.1 SMB AI Governance Cheat Sheet
MUST DO (Legal Requirements):
- Don't use prohibited AI (social scoring, etc.)
- Disclose AI use to customers where required
- Keep records of AI decisions (especially high-risk)
- Train employees on AI use
SHOULD DO (Risk Management):
- Maintain AI inventory
- Vet AI vendors before use
- Have approval process for new AI
- Review AI outputs before publishing
- Document AI-assisted decisions
NICE TO HAVE (Maturity):
- Formal AI policy
- Regular AI reviews
- Monitoring dashboards
- Incident response plan
9.2 Emergency Contacts
If AI Goes Wrong:
- Stop using the AI system
- Document what happened
- Contact your vendor
- Notify affected parties
- Report serious incidents to authorities (EU AI Act)
10. Checklist: Your First 30 Days
Week 1
- Complete AI inventory spreadsheet
- Classify each AI by risk level
- Identify high-risk AI (if any)
Week 2
- Draft one-page AI policy
- Get leadership sign-off on policy
- Communicate policy to employees
Week 3
- Complete vendor assessment for top 3 AI vendors
- Identify any gaps in vendor compliance
- Set up basic access controls
Week 4
- Brief employees on AI policy
- Set up quarterly review calendar
- Document your governance setup
- Celebrate! You now have basic AI governance.
Document Control
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2025-06-15 | AI Governance Office | Initial release |
Need More Help?
- NIST AI RMF resources: https://www.nist.gov/itl/ai-risk-management-framework
- EU AI Act info: https://artificialintelligenceact.eu
- ISO 42001 overview: https://www.iso.org/standard/42001
This guide is designed to be completed in under 30 hours. Start today!
CODITECT AI Risk Management Framework
Document ID: AI-RMF-17 | Version: 2.0.0 | Status: Active
AZ1.AI Inc. | CODITECT Platform
This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.
Repository: coditect-ai-risk-management-framework Last Updated: 2026-01-15 Owner: AZ1.AI Inc. | Lead: Hal Casteel