
AI Governance Quick-Start Guide for SMBs

Document Type: Implementation Guide
Target Audience: Small and Medium Businesses (1-500 employees)
Framework Alignment: NIST AI RMF 2.0, EU AI Act, ISO/IEC 42001
Version: 1.0


1. Why This Matters for SMBs

1.1 The SMB AI Reality

| Challenge | SMB Reality | This Guide's Solution |
|---|---|---|
| Limited resources | No dedicated AI governance team | Lightweight, proportionate controls |
| Budget constraints | Cannot afford enterprise tools | Free/low-cost tool alternatives |
| Regulatory pressure | Same EU AI Act applies | Simplified compliance path |
| Competitive pressure | Must adopt AI to compete | Enable safe AI adoption |

1.2 Minimum Viable Governance

This guide provides the minimum governance needed to:

  • ✓ Comply with EU AI Act requirements
  • ✓ Reduce AI-related risks
  • ✓ Enable customer/partner confidence
  • ✓ Avoid costly incidents

2. 5-Step Quick-Start

Step 1: Know Your AI (Week 1)

Goal: Create a simple inventory of all AI in use.

Action: Complete this spreadsheet for every AI system:

| AI System | Vendor/Type | Purpose | Data Used | Owner |
|---|---|---|---|---|
| ChatGPT | OpenAI API | Customer support drafts | Customer queries | [Name] |
| Copilot | Microsoft | Code assistance | Source code | [Name] |
| [Your AI] | | | | |

Time Required: 2-4 hours
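
The inventory above can live in any spreadsheet, but if you prefer a script, here is a minimal Python sketch that writes the same five columns to a CSV file. The starter rows and owner names are placeholders for illustration, not recommendations.

```python
import csv

# Hypothetical starter rows; replace with your own AI systems.
INVENTORY = [
    {"AI System": "ChatGPT", "Vendor/Type": "OpenAI API",
     "Purpose": "Customer support drafts", "Data Used": "Customer queries",
     "Owner": "Jane Smith"},
    {"AI System": "Copilot", "Vendor/Type": "Microsoft",
     "Purpose": "Code assistance", "Data Used": "Source code",
     "Owner": "John Doe"},
]

def write_inventory(path="ai_inventory.csv", rows=INVENTORY):
    """Write the Step 1 inventory spreadsheet as a CSV file."""
    fields = ["AI System", "Vendor/Type", "Purpose", "Data Used", "Owner"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)
    return path
```

A plain spreadsheet is fine too; the point is one row per AI system, reviewed quarterly.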

Step 2: Classify Your Risk (Week 1)

Goal: Determine risk level for each AI system.

Simple Risk Classification:

QUESTION 1: Does the AI make or influence decisions about:

  • Hiring/firing people
  • Credit/loan approvals
  • Medical diagnosis
  • Legal matters
  • Law enforcement
  • Education access
  • Critical infrastructure

YES → HIGH RISK (needs more governance)
NO → Continue to Question 2

QUESTION 2: Does the AI process sensitive data?

  • Health information
  • Financial data
  • Biometric data
  • Children's data

YES → MEDIUM RISK
NO → LOW RISK

Time Required: 1-2 hours
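
If you track many systems, the two-question flowchart above can be encoded in a few lines. This is an illustrative sketch only: the tag names (`hiring`, `health`, etc.) are made up for this example, and the function is no substitute for a proper legal classification under the EU AI Act.

```python
# Hypothetical tags corresponding to the Question 1 decision areas.
HIGH_RISK_DECISIONS = {
    "hiring", "credit", "medical", "legal",
    "law_enforcement", "education", "critical_infrastructure",
}
# Hypothetical tags corresponding to the Question 2 sensitive data types.
SENSITIVE_DATA = {"health", "financial", "biometric", "children"}

def classify_risk(decision_areas: set, data_types: set) -> str:
    """Apply the two-question flowchart from Step 2.

    decision_areas / data_types are sets of lowercase tags describing
    what the AI influences and what data it processes.
    """
    if decision_areas & HIGH_RISK_DECISIONS:
        return "HIGH"    # Question 1: consequential decisions about people
    if data_types & SENSITIVE_DATA:
        return "MEDIUM"  # Question 2: sensitive data processed
    return "LOW"
```

Record the resulting level in the "Risk Level" column of your inventory.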

Step 3: Set Basic Rules (Week 2)

Goal: Establish simple usage policies.

One-Page AI Policy (adapt for your company):

# [COMPANY NAME] AI Usage Policy

## What Employees CAN Do:
- Use approved AI tools (see list below)
- Use AI to draft documents, emails, code
- Use AI for research and analysis

## What Employees CANNOT Do:
- Enter customer personal information into public AI tools
- Enter passwords, API keys, or secrets
- Use AI for final hiring/firing decisions without human review
- Use AI outputs without verification
- Use non-approved AI tools for work

## Approved AI Tools:
1. [Tool 1] - For [Purpose]
2. [Tool 2] - For [Purpose]
3. [Add your approved tools]

## Questions?
Contact: [Name/Email]

Effective: 2026-01-15

Time Required: 2-4 hours

Step 4: Add Basic Safeguards (Week 2-3)

Goal: Implement minimum safety controls.

Safeguard Checklist:

| Safeguard | How to Implement | Cost | Done? |
|---|---|---|---|
| AI tool approval | IT must approve before use | Free | [ ] |
| Human review | Require review before publishing AI outputs | Free | [ ] |
| Data rules | Block PII in prompts (training + awareness) | Free | [ ] |
| Vendor vetting | Check vendor has security certifications | Free | [ ] |
| Usage logging | Enable logging in AI tools | Varies | [ ] |

Time Required: 4-8 hours
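
The "Data rules" safeguard is mostly training and awareness, but a simple technical backstop is possible. The sketch below uses two illustrative regular expressions to redact email addresses and phone numbers before a prompt leaves your network. Real PII detection needs a dedicated tool; these patterns are a minimal example and will miss many cases.

```python
import re

# Illustrative patterns only; a real deployment would use a
# dedicated PII-detection tool, not regexes alone.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace obvious PII in a prompt with placeholders before it
    is sent to a public AI tool."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt
```

Pair any technical filter with employee training: the policy rule is what matters, the filter just catches slips.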

Step 5: Document and Review (Week 3-4)

Goal: Create minimal documentation for compliance.

SMB Documentation Kit:

| Document | Template | Time to Complete |
|---|---|---|
| AI Inventory | Spreadsheet (Step 1) | 2-4 hours |
| AI Policy | One-pager (Step 3) | 2-4 hours |
| Risk Assessment | Simple checklist per AI | 1 hour each |
| Vendor Review | Basic questionnaire | 30 min each |

Ongoing: Review quarterly (put it in your calendar!)

Total Time to Basic Compliance: ~20-30 hours


3. EU AI Act Essentials for SMBs

3.1 What You Must Know

| If you... | Your obligations | Deadline |
|---|---|---|
| Use AI for hiring | High-risk requirements | Aug 2026 |
| Use AI for credit decisions | High-risk requirements | Aug 2026 |
| Use ChatGPT/Claude for work | Transparency + data rules | Now |
| Build AI products | Varies by risk level | Varies |
| Operate only in non-EU markets | Check local laws | - |

3.2 Prohibited AI Uses

NEVER use AI for:

  • Scoring people based on social behavior
  • Exploiting vulnerabilities of specific groups
  • Real-time facial recognition in public (without authorization)
  • Inferring emotions in workplace for performance evaluation
  • Creating databases through mass facial recognition scraping

3.3 SMB EU AI Act Compliance Path

If you ONLY USE AI (don't build it):

| Step | Action | Priority |
|---|---|---|
| 1 | Inventory your AI tools | High |
| 2 | Check for prohibited uses | High |
| 3 | Classify risk levels | Medium |
| 4 | Train employees on AI literacy | Medium |
| 5 | Document your AI use | Medium |
| 6 | Monitor vendor compliance | Low |

4. Templates and Tools

4.1 AI Inventory Spreadsheet

Download or copy this structure:

| Column | Description | Example |
|---|---|---|
| AI System Name | What you call it | "Customer Chatbot" |
| Vendor | Who provides it | "OpenAI" |
| Product | Specific product | "ChatGPT API" |
| Purpose | What it does | "Answer customer questions" |
| Risk Level | Low/Medium/High | "Medium" |
| Data Types | What data it sees | "Customer queries, no PII" |
| Owner | Who's responsible | "Jane Smith" |
| Contract End | When contract expires | "2025-12-31" |
| Last Review | When you last checked it | "2025-01-15" |

4.2 Simple Vendor Assessment

Before using a new AI vendor, check:

| Question | Good Answer | Your Vendor |
|---|---|---|
| Do they have SOC 2 certification? | Yes | [ ] Yes [ ] No |
| Do they have a privacy policy? | Yes | [ ] Yes [ ] No |
| Do they train on your data? | No (or opt-out available) | [ ] Yes [ ] No |
| Do they have terms of service? | Yes | [ ] Yes [ ] No |
| Is there a data processing agreement? | Yes | [ ] Yes [ ] No |
| Can you delete your data? | Yes | [ ] Yes [ ] No |

Score: 5-6 Yes = Good | 3-4 Yes = Acceptable with caution | <3 Yes = Consider alternatives
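
If you assess vendors regularly, the scoring bands above can be automated. A minimal sketch, assuming yes/no answers keyed by made-up check names; note that for the "train on your data" question, "No" counts as the passing answer.

```python
# The six checks from the vendor assessment table, as hypothetical keys.
# Each maps to True when the vendor gives the "Good Answer".
CHECKS = [
    "soc2_certification",
    "privacy_policy",
    "no_training_on_your_data",  # "No" in the table counts as a pass
    "terms_of_service",
    "data_processing_agreement",
    "data_deletion_available",
]

def score_vendor(answers: dict) -> str:
    """Map pass/fail answers to the scoring bands above."""
    passes = sum(1 for check in CHECKS if answers.get(check, False))
    if passes >= 5:
        return "Good"
    if passes >= 3:
        return "Acceptable with caution"
    return "Consider alternatives"
```

Keep the completed checklist with your vendor records so the score is auditable.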

4.3 AI Decision Documentation (For High-Risk)

When AI influences important decisions, document:

# AI-Assisted Decision Record

Date: 2026-01-15
Decision: [What was decided]
AI System Used: [Which AI tool]
AI Output: [What the AI said/recommended]
Human Review: [Who reviewed it]
Final Decision: [What was actually decided]
Justification: [Why this decision was made]

Signed: _______________
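
To file these records consistently, the template can be rendered from code. A minimal sketch; the function name and parameters are hypothetical and simply mirror the fields of the template above.

```python
from datetime import date

def decision_record(decision, ai_system, ai_output, reviewer,
                    final_decision, justification):
    """Render the AI-assisted decision record template as plain text."""
    return "\n".join([
        "# AI-Assisted Decision Record",
        "",
        f"Date: {date.today().isoformat()}",
        f"Decision: {decision}",
        f"AI System Used: {ai_system}",
        f"AI Output: {ai_output}",
        f"Human Review: {reviewer}",
        f"Final Decision: {final_decision}",
        f"Justification: {justification}",
    ])
```

Store the rendered record with the signed copy; the point is a consistent, retrievable audit trail.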

5. Free and Low-Cost Tools

5.1 Documentation

  • Google Sheets/Excel: AI inventory, risk tracking
  • Google Docs/Notion: Policies, procedures
  • GitHub/GitLab: Version control for AI code

5.2 Monitoring

  • Cloud provider built-in: AWS CloudWatch, Azure Monitor, GCP Logging
  • Open source: Prometheus + Grafana (free, self-hosted)

5.3 Compliance

  • NIST AI RMF Playbook: Free guidance from NIST
  • EU AI Act templates: Free from industry associations
  • This framework: Adapt enterprise templates for SMB scale

6. When to Get Help

6.1 DIY vs. Get Help

| Situation | DIY Feasible? | Consider Getting Help |
|---|---|---|
| Using ChatGPT for internal tasks | ✓ Yes | If handling sensitive data |
| Building AI into your product | Maybe | Likely yes |
| AI makes decisions about people | No | Yes - legal review needed |
| Selling into regulated industries | No | Yes - compliance review |
| Large-scale AI deployment | No | Yes - architecture review |

6.2 Types of Help Available

| Help Type | Cost Range | When to Use |
|---|---|---|
| Legal review | $2,000-10,000 | High-risk AI, EU market |
| Compliance consultant | $5,000-25,000 | Building a compliance program |
| ISO 42001 certification | $10,000-50,000 | When enterprise customers require it |
| Security assessment | $5,000-20,000 | Handling sensitive data |

7. Common SMB Mistakes to Avoid

7.1 Top 10 Mistakes

| Mistake | Why It's Bad | How to Avoid |
|---|---|---|
| No AI inventory | Can't manage what you don't know | Do Step 1 first |
| Ignoring EU AI Act | Fines up to 7% of revenue | Do the basics |
| Putting secrets in AI | Data breach risk | Train employees |
| No human review | AI makes mistakes | Require review |
| Using unapproved tools | Shadow AI risk | Create approved list |
| No vendor vetting | Third-party risk | Basic checks |
| Assuming AI is always right | Hallucinations happen | Verify outputs |
| No documentation | Audit failure | Keep simple records |
| Over-engineering | Wastes limited resources | Start simple |
| Doing nothing | Falling behind competitors | Start today |

8. Growth Path

8.1 As You Grow, Add More

| Stage | Team Size | AI Governance Level |
|---|---|---|
| Startup | 1-10 | This quick-start guide |
| Growing SMB | 10-50 | Add formal policies |
| Scaling SMB | 50-200 | Add dedicated owner |
| Pre-Enterprise | 200-500 | Consider full framework |
| Enterprise | 500+ | Full enterprise framework |

8.2 Maturity Progression

| Level 1: Ad-Hoc (where many SMBs start) | Level 2: Basic (this guide gets you here) | Level 3: Managed (next step) |
|---|---|---|
| No inventory | AI inventory exists | Inventory automated |
| No policies | Basic policy | Full policy set |
| No oversight | Someone responsible | Governance process |
| No documentation | Basic documentation | Complete records |

9. Quick Reference Card

9.1 SMB AI Governance Cheat Sheet

MUST DO (Legal Requirements):

  • Don't use prohibited AI (social scoring, etc.)
  • Disclose AI use to customers where required
  • Keep records of AI decisions (especially high-risk)
  • Train employees on AI use

SHOULD DO (Risk Management):

  • Maintain AI inventory
  • Vet AI vendors before use
  • Have approval process for new AI
  • Review AI outputs before publishing
  • Document AI-assisted decisions

NICE TO HAVE (Maturity):

  • Formal AI policy
  • Regular AI reviews
  • Monitoring dashboards
  • Incident response plan

9.2 Emergency Contacts

If AI Goes Wrong:

  1. Stop using the AI system
  2. Document what happened
  3. Contact your vendor
  4. Notify affected parties
  5. Report serious incidents to authorities (EU AI Act)

10. Checklist: Your First 30 Days

Week 1

  • Complete AI inventory spreadsheet
  • Classify each AI by risk level
  • Identify high-risk AI (if any)

Week 2

  • Draft one-page AI policy
  • Get leadership sign-off on policy
  • Communicate policy to employees

Week 3

  • Complete vendor assessment for top 3 AI vendors
  • Identify any gaps in vendor compliance
  • Set up basic access controls

Week 4

  • Brief employees on AI policy
  • Set up quarterly review calendar
  • Document your governance setup
  • Celebrate! You now have basic AI governance.

Document Control

| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2025-06-15 | AI Governance Office | Initial release |



This guide is designed to be completed in under 30 hours. Start today!


CODITECT AI Risk Management Framework

Document ID: AI-RMF-17 | Version: 2.0.0 | Status: Active


AZ1.AI Inc. | CODITECT Platform

Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001


This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework Last Updated: 2026-01-15 Owner: AZ1.AI Inc. | Lead: Hal Casteel