1-2-3 Quick Start: CODITECT Document Management System

Complete Setup, Deployment & Operations Guide

Product: CODITECT Enterprise Content and Document Management System
Owner: AZ1.AI Inc
Platform: Google Cloud Platform (GCP) / Kubernetes
Version: 1.0.0
Last Updated: February 20, 2026


Executive Summary

The CODITECT Document Management System (DMS) is an enterprise-grade platform for AI-powered document lifecycle management. Built on FastAPI (Python) and React (TypeScript), it delivers semantic vector search via pgvector, intelligent document chunking with GraphRAG relationships, real-time monitoring, and multi-tenant RBAC security.

Key Results:

  • 93% reduction in document retrieval time (financial services)
  • 17% increase in compliance accuracy (regulated industries)
  • 80% reduction in document review time (legal/regulatory)
  • 2-3x ROI within the first year

Architecture at a Glance:

| Layer | Technology | Purpose |
| --- | --- | --- |
| Frontend | React 18 + TypeScript + Vite + TailwindCSS | Dashboard, search UI, monitoring |
| API | FastAPI 0.104+ (Python 3.10+) | REST API, authentication, rate limiting |
| Search | PostgreSQL 15 + pgvector | Semantic vector search, hybrid search |
| Processing | Celery + Redis | Background document processing, task queues |
| Metrics | TimescaleDB + Prometheus | Time-series metrics, alerting |
| Infrastructure | GKE + Cloud SQL + Memorystore + Cloud Storage | Production hosting |
| CI/CD | GitHub Actions | Automated build, test, deploy |

Subscription Tiers:

| Tier | Documents | Searches/Day | Features |
| --- | --- | --- | --- |
| Free | 100 | 50 | Basic search |
| Pro | 10,000 | 500 | GraphRAG, priority processing |
| Enterprise | Unlimited | Unlimited | SSO, dedicated support, SLA |

Introduction

This guide consolidates every process needed to take the CODITECT DMS from source code to a fully operational production deployment. It is organized into three phases:

  1. Setup — Local development environment, prerequisites, configuration
  2. Deployment — GCP infrastructure provisioning, Kubernetes deployment, CI/CD pipeline
  3. Operations — Day-to-day administration, monitoring, security, backup, SDK integration

Each phase is broken into numbered steps. Follow them sequentially for a first-time deployment, or jump to a specific section for reference.

Prerequisites for this guide:

  • Access to the coditect-ai GitHub organization
  • A Google Cloud Platform account with billing enabled
  • A workstation with macOS or Linux
  • Basic familiarity with Docker, Kubernetes, and Python/Node.js development

Related Documentation:

| Document | Location |
| --- | --- |
| Getting Started (End Users) | docs/guides/getting-started.md |
| Deployment Guide (Infrastructure) | docs/guides/deployment-guide.md |
| Operations Guide (Day-to-Day) | docs/guides/operations-guide.md |
| SDK Integration Guide | docs/guides/sdk-guide.md |
| CODITECT Integration Guide | docs/guides/coditect-integration-guide.md |
| API Reference | docs/api/api-reference.md |
| OpenAPI Specification | docs/api/openapi.yaml |
| Production Readiness Checklist | docs/production-readiness-checklist.md |
| Disaster Recovery Runbook | docs/disaster-recovery-runbook.md |

Phase 1: Setup — Local Development Environment

Step 1.1: Install Prerequisites

Required Software:

| Tool | Minimum Version | Installation |
| --- | --- | --- |
| Python | 3.10+ | brew install python@3.10 |
| Node.js | 18+ | brew install node@18 |
| npm | 9+ | Bundled with Node.js |
| PostgreSQL | 14+ with pgvector | brew install postgresql@14 |
| Redis | 5+ | brew install redis |
| Git | 2.30+ | brew install git |
| Google Cloud SDK | Latest | brew install google-cloud-sdk |
| kubectl | Latest | brew install kubectl |
| Helm | 3+ | brew install helm |
| Docker | 20+ | Install Docker Desktop |

Install all prerequisites at once (macOS):

# Core tools
brew install python@3.10 node@18 postgresql@14 redis git

# Cloud/container tools
brew install google-cloud-sdk kubectl helm

# PostgreSQL extensions
# pgvector: install after PostgreSQL is running
# TimescaleDB: brew install timescaledb

Verify installations:

python3 --version    # Python 3.10+
node --version # v18+
npm --version # 9+
psql --version # 14+
redis-server --version # 5+
gcloud --version # Latest
kubectl version --client
helm version
docker --version

Step 1.2: Clone and Configure the Repository

# Navigate to CODITECT rollout master
cd /path/to/coditect-rollout-master/submodules/docs/coditect-documentation/coditect-document-management

# Or if working from standalone clone:
# git clone git@github.com:coditect-ai/coditect-document-management.git
# cd coditect-document-management

Step 1.3: Set Up Python Backend

# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate

# Install Python dependencies
pip install -r requirements.txt -r requirements-dev.txt

# Verify installation
python3 -c "import fastapi; print(f'FastAPI {fastapi.__version__}')"

Step 1.4: Set Up Node.js Frontend

# Install all Node.js dependencies (monorepo)
npm run install:all

# Verify installation
cd src/frontend && npm run type-check && cd ../..

Step 1.5: Configure Local Database

# Start PostgreSQL
brew services start postgresql@14

# Create database and user
psql postgres -c "CREATE USER dms_user WITH PASSWORD 'local_dev_password';"
psql postgres -c "CREATE DATABASE dms OWNER dms_user;"

# Enable pgvector extension
psql dms -c "CREATE EXTENSION IF NOT EXISTS vector;"

# Enable TimescaleDB extension (if installed)
psql dms -c "CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;"

Step 1.6: Configure Local Redis

# Start Redis
brew services start redis

# Verify connection
redis-cli ping # Should return: PONG

Step 1.7: Configure Environment Variables

# Copy example environment file
cp .env.example .env

Edit .env with the following values:

# Application
APP_NAME="CODITECT DMS"
APP_VERSION="1.0.0"
ENVIRONMENT="development"
LOG_LEVEL="DEBUG"

# Database
DATABASE_URL="postgresql://dms_user:local_dev_password@localhost:5432/dms"

# Redis
REDIS_URL="redis://localhost:6379"

# Celery
CELERY_BROKER_URL="redis://localhost:6379/0"
CELERY_RESULT_BACKEND="redis://localhost:6379/0"

# JWT Authentication
JWT_SECRET="your-development-secret-key-change-in-production"
JWT_ALGORITHM="HS256"
JWT_EXPIRATION_HOURS=24

# CORS
CORS_ORIGINS="http://localhost:5173,http://localhost:3000"

# API Keys (development)
CODITECT_API_KEY="dms_dev_sk_development_key_for_local_testing"

Step 1.8: Initialize Database Schema

# Run Alembic migrations
alembic upgrade head

# Verify schema
psql dms -c "\dt" | head -20

Step 1.9: Start Development Servers

Terminal 1 — Backend API:

source .venv/bin/activate
uvicorn src.backend.main:app --reload --host 0.0.0.0 --port 8000

Terminal 2 — Frontend:

npm run dev:frontend

Terminal 3 — Celery Worker (for background processing):

source .venv/bin/activate
celery -A src.backend.tasks worker -l info

Verify everything is running:

| Service | URL | Expected |
| --- | --- | --- |
| Backend API | http://localhost:8000/health | {"status": "healthy"} |
| Swagger UI | http://localhost:8000/docs | API documentation |
| ReDoc | http://localhost:8000/redoc | API documentation |
| Frontend | http://localhost:5173 | Dashboard UI |
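
The health endpoint can also be checked programmatically. A small sketch, assuming only that /health returns the JSON payload shown above:

```python
import json
from urllib.request import urlopen

def parse_health(body: str) -> bool:
    """True when a /health response body reports a healthy status."""
    try:
        return json.loads(body).get("status") == "healthy"
    except (ValueError, AttributeError):
        return False

def check(url: str = "http://localhost:8000/health") -> bool:
    # Live network call -- requires the backend from this step to be running.
    with urlopen(url, timeout=5) as resp:
        return parse_health(resp.read().decode())
```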

Step 1.10: Run Tests

# Backend tests
source .venv/bin/activate
pytest

# Backend with coverage
pytest --cov=src/backend --cov-report=html

# Frontend tests
npm run test:frontend

# All tests
npm run test:all

# Lint
npm run backend:lint
npm run lint:frontend

# Type checking
npm run backend:type-check
cd src/frontend && npm run type-check

Phase 2: Deployment — GCP Production Infrastructure

Step 2.1: Authenticate with Google Cloud

# Login to GCP
gcloud auth login
gcloud auth application-default login

# Set project
gcloud config set project coditect-prod

# Verify
gcloud config get-value project # Should return: coditect-prod

Step 2.2: Enable Required GCP APIs

gcloud services enable \
container.googleapis.com \
sqladmin.googleapis.com \
redis.googleapis.com \
secretmanager.googleapis.com \
cloudarmor.googleapis.com \
monitoring.googleapis.com \
logging.googleapis.com \
artifactregistry.googleapis.com

Step 2.3: Create GKE Cluster

# Create production cluster
gcloud container clusters create coditect-prod-cluster \
--zone=us-central1-a \
--machine-type=n2-standard-4 \
--num-nodes=3 \
--min-nodes=2 \
--max-nodes=10 \
--enable-autoscaling \
--enable-network-policy \
--enable-ip-alias \
--enable-shielded-nodes \
--workload-pool=coditect-prod.svc.id.goog

# Get cluster credentials
gcloud container clusters get-credentials coditect-prod-cluster \
--zone=us-central1-a

# Verify
kubectl get nodes

Step 2.4: Create Cloud SQL Instance (PostgreSQL 15 + pgvector)

# Create PostgreSQL instance
gcloud sql instances create coditect-dms-db \
--database-version=POSTGRES_15 \
--tier=db-custom-4-16384 \
--region=us-central1 \
--availability-type=regional \
--storage-size=100GB \
--storage-auto-increase \
--backup-start-time=02:00 \
--enable-point-in-time-recovery

# Create database
gcloud sql databases create dms \
--instance=coditect-dms-db

# Create user (save this password securely)
DMS_DB_PASSWORD=$(openssl rand -base64 32)
gcloud sql users create dms_user \
--instance=coditect-dms-db \
--password="$DMS_DB_PASSWORD"
echo "Database password: $DMS_DB_PASSWORD" # Save this securely

# Enable pgvector extension
gcloud sql connect coditect-dms-db --user=postgres
# In the psql prompt:
# CREATE EXTENSION IF NOT EXISTS vector;
# \q

Step 2.5: Create Memorystore Redis Instance

gcloud redis instances create coditect-dms-cache \
--size=2 \
--region=us-central1 \
--redis-version=redis_7_0 \
--tier=standard

# Get Redis host IP (needed for ConfigMap)
REDIS_HOST=$(gcloud redis instances describe coditect-dms-cache \
--region=us-central1 --format='value(host)')
echo "Redis Host: $REDIS_HOST"

Step 2.6: Create Cloud Storage Bucket

# Create document storage bucket
gsutil mb -l us-central1 gs://coditect-dms-documents

# Set lifecycle policy (optional: auto-delete old versions after 90 days)
cat > /tmp/lifecycle.json << 'EOF'
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "Delete"},
        "condition": {"age": 90, "isLive": false}
      }
    ]
  }
}
EOF
gsutil lifecycle set /tmp/lifecycle.json gs://coditect-dms-documents
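
The rule above deletes only archived (non-live) object versions once they reach 90 days. As a sanity check, its logic reduces to (assuming the GCS age condition counts whole days since creation):

```python
def would_delete(age_days: int, is_live: bool, max_age_days: int = 90) -> bool:
    """Mirror of the lifecycle rule: delete non-live versions at or past the age threshold.
    Live (current) versions are never touched by this rule."""
    return age_days >= max_age_days and not is_live
```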

Step 2.7: Configure Secrets in Secret Manager

# Database URL
echo -n "postgresql://dms_user:${DMS_DB_PASSWORD}@/dms?host=/cloudsql/coditect-prod:us-central1:coditect-dms-db" | \
gcloud secrets create dms-database-url --data-file=-

# JWT Secret
openssl rand -base64 64 | \
gcloud secrets create dms-jwt-secret --data-file=-

# OpenAI API Key (for embeddings)
echo -n "sk-your-openai-key-here" | \
gcloud secrets create dms-openai-key --data-file=-

# Stripe Secret Key (for billing)
echo -n "sk_live_your-stripe-key-here" | \
gcloud secrets create dms-stripe-secret --data-file=-

Step 2.8: Set Up Workload Identity and IAM

# Create Kubernetes namespace
kubectl create namespace coditect-dms

# Create Kubernetes service account
kubectl create serviceaccount dms-api -n coditect-dms

# Create GCP service account
gcloud iam service-accounts create dms-api-sa \
--display-name="DMS API Service Account"

# Bind Kubernetes SA to GCP SA (Workload Identity)
gcloud iam service-accounts add-iam-policy-binding \
dms-api-sa@coditect-prod.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:coditect-prod.svc.id.goog[coditect-dms/dms-api]"

# Annotate the Kubernetes SA
kubectl annotate serviceaccount dms-api \
--namespace coditect-dms \
iam.gke.io/gcp-service-account=dms-api-sa@coditect-prod.iam.gserviceaccount.com

# Grant secret access to the GCP SA
for SECRET in dms-database-url dms-jwt-secret dms-openai-key dms-stripe-secret; do
gcloud secrets add-iam-policy-binding $SECRET \
--role="roles/secretmanager.secretAccessor" \
--member="serviceAccount:dms-api-sa@coditect-prod.iam.gserviceaccount.com"
done

# Grant Cloud SQL access
gcloud projects add-iam-policy-binding coditect-prod \
--role="roles/cloudsql.client" \
--member="serviceAccount:dms-api-sa@coditect-prod.iam.gserviceaccount.com"

# Grant Cloud Storage access
gsutil iam ch serviceAccount:dms-api-sa@coditect-prod.iam.gserviceaccount.com:objectAdmin \
gs://coditect-dms-documents

Step 2.9: Deploy Kubernetes Manifests

Apply all manifests in order:

# 1. Namespace and ConfigMap
kubectl apply -f deploy/kubernetes/namespace.yaml

# 2. Secrets (synced from Secret Manager)
kubectl apply -f deploy/kubernetes/secrets.yaml

# 3. API Deployment + Service
kubectl apply -f deploy/kubernetes/api-deployment.yaml

# 4. Celery Worker Deployment
kubectl apply -f deploy/kubernetes/workers.yaml

# 5. Horizontal Pod Autoscaler
kubectl apply -f deploy/kubernetes/hpa.yaml

# 6. Ingress + Managed Certificate
kubectl apply -f deploy/kubernetes/ingress.yaml

# 7. Network Policies
kubectl apply -f deploy/kubernetes/network-policy.yaml

# Or apply all at once:
kubectl apply -f deploy/kubernetes/

Step 2.10: Run Database Migrations in Production

# Start Cloud SQL Proxy
cloud_sql_proxy -instances=coditect-prod:us-central1:coditect-dms-db=tcp:5432 &

# Run migrations
DATABASE_URL="postgresql://dms_user:${DMS_DB_PASSWORD}@localhost:5432/dms" \
alembic upgrade head

# Verify
DATABASE_URL="postgresql://dms_user:${DMS_DB_PASSWORD}@localhost:5432/dms" \
alembic current

# Stop proxy
kill %1

Step 2.11: Configure Cloud Armor (WAF/DDoS Protection)

# Create security policy
gcloud compute security-policies create dms-security-policy \
--description="DMS API Security Policy"

# Rate limiting: 1000 requests/min per IP
gcloud compute security-policies rules create 1000 \
--security-policy=dms-security-policy \
--expression="true" \
--action=rate-based-ban \
--rate-limit-threshold-count=1000 \
--rate-limit-threshold-interval-sec=60 \
--ban-duration-sec=600
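
Conceptually, this rule admits up to 1000 requests per 60-second window per client IP and bans offenders for 600 seconds. A toy Python model of that behavior, for intuition only (Cloud Armor's real enforcement is distributed and more nuanced):

```python
import collections
import time

class RateBasedBan:
    """Toy sliding-window model of a rate-based-ban rule."""

    def __init__(self, threshold=1000, interval=60, ban_duration=600,
                 clock=time.monotonic):
        self.threshold = threshold
        self.interval = interval
        self.ban_duration = ban_duration
        self.clock = clock
        self.hits = collections.defaultdict(collections.deque)
        self.banned_until = {}

    def allow(self, ip: str) -> bool:
        now = self.clock()
        if self.banned_until.get(ip, 0) > now:
            return False  # still serving a ban
        window = self.hits[ip]
        while window and window[0] <= now - self.interval:
            window.popleft()  # drop hits outside the interval
        window.append(now)
        if len(window) > self.threshold:
            self.banned_until[ip] = now + self.ban_duration
            return False
        return True
```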

# SQL injection protection
gcloud compute security-policies rules create 2000 \
--security-policy=dms-security-policy \
--expression="evaluatePreconfiguredExpr('sqli-stable')" \
--action=deny-403

# XSS protection
gcloud compute security-policies rules create 3000 \
--security-policy=dms-security-policy \
--expression="evaluatePreconfiguredExpr('xss-stable')" \
--action=deny-403

Step 2.12: Set Up CI/CD Pipeline (GitHub Actions)

The CI/CD pipeline is defined in .github/workflows/deploy.yaml. It triggers on pushes to main that modify src/ or deploy/.

Pipeline steps:

  1. Checkout code
  2. Authenticate to GCP via Workload Identity Federation
  3. Build Docker image with commit SHA tag
  4. Push to Google Container Registry
  5. Deploy to GKE via kubectl set image
  6. Wait for rollout completion (300s timeout)

Required GitHub Secrets:

| Secret | Purpose |
| --- | --- |
| GCP_SA_KEY | GCP service account credentials JSON |
| GCP_PROJECT_ID | coditect-prod |

Set secrets via CLI:

gh secret set GCP_SA_KEY < /path/to/service-account-key.json
gh secret set GCP_PROJECT_ID -b "coditect-prod"

Step 2.13: Verify Production Deployment

# Check pod status
kubectl get pods -n coditect-dms

# Check logs
kubectl logs -f deployment/coditect-dms-api -n coditect-dms

# Health check
curl https://dms-api.coditect.ai/health

# Readiness check
curl https://dms-api.coditect.ai/health/ready

# Liveness check
curl https://dms-api.coditect.ai/health/live

Expected output from /health:

{
  "status": "healthy",
  "version": "1.0.0",
  "database": "connected",
  "redis": "connected",
  "uptime": "..."
}

Phase 3: Operations, Monitoring, Security & Integration

Step 3.1: Set Up Monitoring (Prometheus + Grafana)

# Add Helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install kube-prometheus-stack
helm install monitoring prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--create-namespace \
--values deploy/kubernetes/monitoring-values.yaml

# Apply ServiceMonitor for DMS
kubectl apply -f deploy/kubernetes/monitoring.yaml

# Access Grafana
kubectl get secret -n monitoring monitoring-grafana \
-o jsonpath="{.data.admin-password}" | base64 --decode
# Note the password

kubectl port-forward -n monitoring svc/monitoring-grafana 3000:80
# Open http://localhost:3000 (admin / <password>)

Key Dashboards to Configure:

| Dashboard | Metrics |
| --- | --- |
| API Overview | Request rate, latency (P50/P95/P99), error rate |
| Resource Utilization | CPU, memory, disk by pod |
| Business Metrics | Documents processed, searches executed, embeddings generated |
| SLO Dashboard | Availability (99.9% target), latency (<200ms P95 target) |

Alert Thresholds:

| Metric | Warning | Critical | Action |
| --- | --- | --- | --- |
| API Error Rate | >5% | >10% | Check logs, scale pods |
| P95 Latency | >1s | >2s | Scale pods, optimize queries |
| CPU Usage | >70% | >85% | Scale out |
| Memory Usage | >80% | >90% | Scale up/out |
| Queue Depth | >100 | >500 | Scale workers |
| Disk Usage | >70% | >85% | Expand storage |
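
These thresholds can be encoded directly in alerting rules. A small classifier matching the table (the metric keys and fractional units are this sketch's own convention, not names from the Grafana config):

```python
# (warning, critical) thresholds; ratios for percentages, count for queue depth
THRESHOLDS = {
    "api_error_rate": (0.05, 0.10),
    "p95_latency_s": (1.0, 2.0),
    "cpu": (0.70, 0.85),
    "memory": (0.80, 0.90),
    "queue_depth": (100, 500),
    "disk": (0.70, 0.85),
}

def severity(metric: str, value: float) -> str:
    """Classify a metric reading as ok / warning / critical per the table."""
    warn, crit = THRESHOLDS[metric]
    if value > crit:
        return "critical"
    if value > warn:
        return "warning"
    return "ok"
```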

Step 3.2: Daily Operations Checklist

Morning routine (5 minutes):

# 1. System health
curl https://dms-api.coditect.ai/health/ready

# 2. Pod status
kubectl get pods -n coditect-dms

# 3. Check processing queue
kubectl exec -it deployment/dms-worker -n coditect-dms -- \
celery -A src.backend.tasks inspect active

# 4. Verify backups ran
gcloud sql backups list --instance=coditect-dms-db | head -5

# 5. Check Grafana alerts
# Open https://monitoring.coditect.ai

Step 3.3: Scaling Operations

Auto-scaling is configured via HPA (Step 2.9). For manual adjustments:

# Scale API pods
kubectl scale deployment coditect-dms-api --replicas=10 -n coditect-dms

# Scale workers
kubectl scale deployment dms-worker --replicas=5 -n coditect-dms

# Adjust HPA limits
kubectl patch hpa dms-api-hpa -n coditect-dms \
-p '{"spec":{"maxReplicas":30}}'

# Check current HPA status
kubectl get hpa -n coditect-dms
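
For intuition when reading `kubectl get hpa` output: the HPA controller targets desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue), clamped to the min/max bounds. A sketch (the bounds here are illustrative, not the values in deploy/kubernetes/hpa.yaml):

```python
import math

def desired_replicas(current: int, current_value: float, target_value: float,
                     min_replicas: int = 2, max_replicas: int = 20) -> int:
    """Kubernetes HPA scaling rule: scale proportionally to metric pressure,
    then clamp to the configured replica bounds."""
    desired = math.ceil(current * current_value / target_value)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 3 pods at 90% average CPU against a 60% target scale to 5 pods.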

Step 3.4: Rolling Updates and Rollbacks

Deploy a new version:

# Update image
kubectl set image deployment/coditect-dms-api \
api=gcr.io/coditect-prod/dms-api:NEW_VERSION \
-n coditect-dms

# Monitor rollout
kubectl rollout status deployment/coditect-dms-api -n coditect-dms

Rollback if issues detected:

# Rollback to previous version
kubectl rollout undo deployment/coditect-dms-api -n coditect-dms

# Rollback to specific revision
kubectl rollout undo deployment/coditect-dms-api -n coditect-dms --to-revision=5

# Restart all pods (preserves current version)
kubectl rollout restart deployment/coditect-dms-api -n coditect-dms

Step 3.5: Database Operations

Connect to production database:

# Start Cloud SQL Proxy
cloud_sql_proxy -instances=coditect-prod:us-central1:coditect-dms-db=tcp:5432 &

# Connect
psql "postgresql://dms_user:PASSWORD@localhost:5432/dms"

Run migrations:

alembic current          # Check current version
alembic upgrade head # Run pending migrations
alembic downgrade -1 # Rollback one step

Database maintenance:

-- Vacuum tables
VACUUM ANALYZE document;
VACUUM ANALYZE chunk;

-- Reindex vector search index
REINDEX INDEX CONCURRENTLY idx_chunk_embedding;

-- Check table sizes
SELECT relname, pg_size_pretty(pg_total_relation_size(relid))
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;

Step 3.6: Backup and Disaster Recovery

Automated backups are enabled (Step 2.4) with daily backups at 02:00 UTC and 7-day retention.

Manual backup:

gcloud sql backups create \
--instance=coditect-dms-db \
--description="Pre-deployment backup $(date +%Y-%m-%d)"

# List backups
gcloud sql backups list --instance=coditect-dms-db

Weekly backup verification:

# 1. Clone the database instance
gcloud sql instances create dms-backup-test \
--source-instance=coditect-dms-db --clone

# 2. Connect and verify data
cloud_sql_proxy -instances=coditect-prod:us-central1:dms-backup-test=tcp:5433 &
psql "postgresql://dms_user:PASSWORD@localhost:5433/dms" \
-c "SELECT COUNT(*) FROM document;"

# 3. Clean up test instance
gcloud sql instances delete dms-backup-test --quiet

For full disaster recovery procedures: See docs/disaster-recovery-runbook.md

Step 3.7: Security Operations

Rotate JWT secret:

# Generate new secret
NEW_SECRET=$(openssl rand -base64 64)

# Update in Secret Manager
echo -n "$NEW_SECRET" | \
gcloud secrets versions add dms-jwt-secret --data-file=-

# Restart pods to pick up new secret
kubectl rollout restart deployment/coditect-dms-api -n coditect-dms

Rotate API keys:

# List current keys
curl https://dms-api.coditect.ai/api/v1/tenants/me/api-keys \
-H "Authorization: Bearer $ADMIN_TOKEN"

# Create new key
curl -X POST https://dms-api.coditect.ai/api/v1/tenants/me/api-keys \
-H "Authorization: Bearer $ADMIN_TOKEN" \
-H "Content-Type: application/json" \
-d '{"name": "Rotated Key", "scopes": ["read", "search", "write"]}'

# Delete old key
curl -X DELETE https://dms-api.coditect.ai/api/v1/tenants/me/api-keys/{key_id} \
-H "Authorization: Bearer $ADMIN_TOKEN"

Review audit logs:

gcloud logging read 'logName:"cloudaudit.googleapis.com"' \
--project=coditect-prod \
--limit=100

Step 3.8: Cache Management

# Check cache stats
redis-cli -h $REDIS_HOST INFO stats
redis-cli -h $REDIS_HOST INFO memory

# Clear search cache
redis-cli -h $REDIS_HOST KEYS "search:*" | xargs redis-cli -h $REDIS_HOST DEL

# Flush all cache (use with caution)
redis-cli -h $REDIS_HOST FLUSHDB

Step 3.9: User and Tenant Management

Add user to tenant:

curl -X POST https://dms-api.coditect.ai/api/v1/tenants/me/users \
-H "Authorization: Bearer $ADMIN_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"email": "new.user@company.com",
"name": "New User",
"role": "editor"
}'

Available roles: owner, admin, editor, viewer, api_only
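
How those roles gate actions is decided by the backend's RBAC layer; the permission sets below are assumptions for illustration only, showing the shape of a role check:

```python
# Hypothetical permission map -- the authoritative mapping lives in the backend.
ROLE_PERMISSIONS = {
    "owner": {"read", "search", "write", "manage_users", "billing"},
    "admin": {"read", "search", "write", "manage_users"},
    "editor": {"read", "search", "write"},
    "viewer": {"read", "search"},
    "api_only": {"read", "search"},
}

def can(role: str, action: str) -> bool:
    """True when the role grants the action; unknown roles grant nothing."""
    return action in ROLE_PERMISSIONS.get(role, set())
```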

Reset user password:

curl -X POST https://dms-api.coditect.ai/api/v1/auth/password-reset \
-H "Content-Type: application/json" \
-d '{"email": "user@company.com"}'

Deactivate user:

curl -X PATCH https://dms-api.coditect.ai/api/v1/tenants/me/users/{user_id} \
-H "Authorization: Bearer $ADMIN_TOKEN" \
-H "Content-Type: application/json" \
-d '{"is_active": false}'

Update tenant subscription tier:

-- Connect via Cloud SQL Proxy
UPDATE tenant SET subscription_tier = 'enterprise' WHERE id = 'tenant-uuid';

Step 3.10: SDK Integration

Python SDK:

pip install coditect-dms

from coditect import DMSClient

client = DMSClient(api_key="dms_prod_sk_...")

# Upload document
doc = client.documents.upload("./my-document.md")
doc = client.documents.wait_for_processing(doc.id, timeout=120)

# Search
results = client.search("How do I implement authentication?")
for result in results:
    print(f"{result.score:.2f}: {result.document_title}")

TypeScript SDK:

npm install @coditect/dms-sdk

import { DMSClient } from '@coditect/dms-sdk';

const client = new DMSClient({ apiKey: 'dms_prod_sk_...' });

const results = await client.search({
  query: 'authentication best practices',
  mode: 'hybrid',
  topK: 10,
});

results.forEach(r => console.log(`${r.score}: ${r.documentTitle}`));

Environment variables for SDKs:

export CODITECT_API_KEY="dms_prod_sk_..."
export CODITECT_API_URL="https://dms-api.coditect.ai/api/v1"
export CODITECT_TIMEOUT="30"

Step 3.11: CODITECT Framework Integration (Claude Code Hooks)

To integrate DMS with Claude Code for automatic frontmatter management:

Add to .claude/settings.json:

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": {"tool_name": "Write"},
        "hooks": [{
          "type": "command",
          "command": "python3 scripts/coditect_integration/document_hooks.py --pre",
          "timeout": 10000
        }]
      }
    ],
    "PostToolUse": [
      {
        "matcher": {"tool_name": "Write"},
        "hooks": [{
          "type": "command",
          "command": "python3 scripts/coditect_integration/document_hooks.py --post",
          "timeout": 10000
        }]
      }
    ]
  }
}

Set up pre-commit validation:

cp scripts/coditect_integration/pre_commit.py .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
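
What the hooks actually validate is defined in scripts/coditect_integration/document_hooks.py. As a purely hypothetical sketch, a frontmatter presence check of the kind such a hook might run could look like:

```python
def has_frontmatter(text: str) -> bool:
    """True when the document opens with a `---` ... `---` YAML frontmatter block.
    (Illustrative only; the real hook's checks may differ.)"""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return False
    # A closing delimiter must exist somewhere after the opening one.
    return any(line.strip() == "---" for line in lines[1:])
```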

Step 3.12: Document Reprocessing

Reprocess a single document:

curl -X POST https://dms-api.coditect.ai/api/v1/documents/{doc_id}/reprocess \
-H "X-API-Key: $API_KEY"

Bulk reprocess (via database):

UPDATE document
SET status = 'pending', updated_at = NOW()
WHERE tenant_id = 'tenant-uuid'
AND document_type = 'guide';

Clear stuck processing jobs:

kubectl exec -it deployment/dms-worker -n coditect-dms -- \
celery -A src.backend.tasks purge -f

Step 3.13: Maintenance Windows

Before maintenance:

# 1. Notify users (24h before) — update status page, send email
# 2. Scale down
kubectl scale deployment coditect-dms-api --replicas=1 -n coditect-dms
kubectl scale deployment dms-worker --replicas=0 -n coditect-dms

After maintenance:

# 1. Verify database
alembic current

# 2. Scale up
kubectl scale deployment coditect-dms-api --replicas=3 -n coditect-dms
kubectl scale deployment dms-worker --replicas=3 -n coditect-dms

# 3. Verify health
curl https://dms-api.coditect.ai/health/ready

# 4. Update status page

Quick Reference

API Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| /health | GET | Basic health check |
| /health/ready | GET | Kubernetes readiness probe |
| /health/live | GET | Kubernetes liveness probe |
| /api/v1/search | POST | Semantic/hybrid search |
| /api/v1/search/graphrag | POST | GraphRAG traversal |
| /api/v1/documents | GET | List documents |
| /api/v1/documents | POST | Create document |
| /api/v1/documents/upload | POST | Upload file |
| /api/v1/documents/{id} | GET | Get document details |
| /api/v1/documents/{id}/chunks | GET | Get document chunks |
| /api/v1/documents/{id}/reprocess | POST | Reprocess document |
| /api/v1/analytics/dashboard | GET | Dashboard metrics |
| /api/v1/analytics/metrics | POST | Query time-series |
| /api/v1/analytics/usage | GET | Usage for billing |
| /api/v1/tenants | POST | Create tenant |
| /api/v1/tenants/me | GET | Get current tenant |
| /api/v1/tenants/me/api-keys | POST | Create API key |
| /api/v1/tenants/me/users | POST | Add user |
| /api/v1/auth/password-reset | POST | Reset password |

Authentication

| Method | Header | Format |
| --- | --- | --- |
| JWT Token | Authorization | Bearer <token> |
| API Key | X-API-Key | dms_{env}_{type}_{32-char} |

Search Modes

| Mode | Use Case |
| --- | --- |
| hybrid (recommended) | General search — combines semantic + keyword |
| vector | Conceptual queries, finding similar content |
| keyword | Exact phrase matching, specific terms |
| graphrag | Explore relationships between chunks |
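
A search request pairs one of these modes with a query string. A sketch of building the POST body for /api/v1/search (the snake_case `top_k` field name is an assumption here; the TypeScript SDK example uses `topK`):

```python
SEARCH_MODES = {"hybrid", "vector", "keyword", "graphrag"}

def build_search_body(query: str, mode: str = "hybrid", top_k: int = 10) -> dict:
    """Assemble a search request body, rejecting unknown modes up front."""
    if mode not in SEARCH_MODES:
        raise ValueError(f"unknown search mode: {mode}")
    return {"query": query, "mode": mode, "top_k": top_k}
```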

Troubleshooting

| Symptom | Likely Cause | Fix |
| --- | --- | --- |
| Pod CrashLoopBackOff | DB connection, missing secrets, OOM | kubectl describe pod <name> -n coditect-dms |
| 401 Unauthorized | Invalid/expired API key or JWT | Verify key, check scopes, regenerate |
| Search returns no results | Documents not processed | Check status; lower min_score; try different mode |
| Slow searches | Vector index fragmented | REINDEX INDEX CONCURRENTLY idx_chunk_embedding; |
| Queue backlog | Worker capacity | Scale workers: kubectl scale deployment dms-worker --replicas=10 |
| High latency | Resource pressure | Scale API pods; check DB connections; check Redis |

Support Contacts

| Role | Contact | When |
| --- | --- | --- |
| On-Call Engineer | PagerDuty | 24/7 for P0/P1 |
| Engineering Lead | eng-lead@az1.ai | Business hours |
| Security Lead | security@az1.ai | Security issues |
| CTO | 1@az1.ai | Escalations |
| General Support | support@az1.ai | Non-urgent |
| Sales | sales@az1.ai | Enterprise pricing |

Document Version: 1.0.0
Created: February 20, 2026
Author: AZ1.AI Inc / CODITECT Core Team