ADR-018: CODI Dynamic Command Architecture (v4)

Document Specification Block

Document: ADR-018-v4-codi-dynamic-command-architecture
Version: 1.0.0
Purpose: Define the dynamic command loading architecture for CODI CLI using JSON libraries with optional WASM modules
Audience: Platform Engineers, AI Agents, DevOps Teams, Frontend Developers
Date Created: 2025-08-29
Date Modified: 2025-08-29
Date Released: DRAFT
QA Reviewed: Pending
Status: PROPOSED

1. Document Information

| Field | Value |
|---|---|
| ADR Number | ADR-018 |
| Title | CODI Dynamic Command Architecture |
| Status | Proposed |
| Date Created | 2025-08-29 |
| Last Modified | 2025-08-29 |
| Version | 1.0.0 |
| Decision Makers | Platform Architecture Team |
| Stakeholders | All CODITECT Users, AI Agents, DevOps Teams |

2. Purpose of this ADR

This ADR serves dual purposes:

  • For Humans 👥: Understand how CODI transforms from a monolithic 25MB CLI to a lightweight 5MB core that dynamically loads command libraries as needed
  • For AI Agents 🤖: Implement the JSON-based command library system with SQLite caching across Cloud Run, WASM, and local environments

3. User Story Context

As a CODITECT platform user,
I want CODI to start instantly with minimal commands and load additional capabilities on demand,
So that I have fast startup times, reduced bandwidth usage, and access to unlimited commands without reinstalling CODI.

📋 Acceptance Criteria:

  • CODI core binary is under 5MB with 8-10 built-in commands
  • Additional commands load dynamically in under 250ms on first use
  • Cached commands execute in under 105ms
  • Command libraries persist across container restarts
  • Works identically in Cloud Run, browser WASM, and local development
  • Supports offline operation with cached libraries

4. Executive Summary

🏢 For Business Stakeholders

Imagine CODI as a Swiss Army knife that starts with just a blade and corkscrew, but can instantly grow new tools when you need them. Instead of carrying a heavy toolbox everywhere, you have a lightweight tool that downloads exactly what you need, when you need it.

Business Value: 97.5% reduction in infrastructure costs ($1000/month → $25/month) by eliminating persistent containers
Key Decision: Transform CODI from a monolithic binary to a dynamic command loader using JSON libraries

💻 For Technical Readers

Technical Summary: CODI becomes a 5MB core runtime that dynamically loads command libraries as JSON bundles with optional WASM modules, cached in SQLite with environment-specific persistence strategies.
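The cache-first load path described above can be sketched as follows. This is an illustrative Python stand-in for the Rust implementation, with a dict replacing the SQLite cache and a plain function replacing the WebSocket client; all names are hypothetical, not the real CODI API:

```python
import time

class DynamicLoader:
    """Cache-first library loader sketch (illustrative names only)."""

    def __init__(self, fetch_fn, ttl_seconds=7 * 24 * 3600):
        self._cache = {}        # stands in for the SQLite cache
        self._fetch = fetch_fn  # stands in for the WebSocket client
        self._ttl = ttl_seconds

    def load_command(self, command):
        # "agent.list" resolves to the "agent" library
        library_name = command.split(".")[0]

        # Serve from cache while the entry is fresh
        entry = self._cache.get(library_name)
        if entry and entry["expires_at"] > time.time():
            return entry["library"], "cache"

        # Cache miss (or expired): fetch from the server and re-cache
        library = self._fetch(library_name)
        self._cache[library_name] = {
            "library": library,
            "expires_at": time.time() + self._ttl,
        }
        return library, "network"
```

The first invocation of a command pays the network round trip; every later invocation within the TTL is served locally, which is what makes the "fast cached execution" acceptance criterion achievable.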

5. Visual Overview

5.1 High-Level Concept

For Everyone: The Smart Toolbox Analogy

Technical View: Architecture Comparison

5.2 Technical Architecture

Command Execution Journey

Detailed Component Architecture

5.3 Library Bundle Structure

Visual Library Anatomy

Library Size Visualization

6. Background & Problem

6.1 Current State Problems

6.2 Requirements

7. Decision

7.1 Chosen Solution: JSON Libraries with Dynamic Loading

Decision Matrix

7.2 Command Library Format

{
  "library": {
    "name": "prompt-engineering",
    "version": "1.0.0",
    "description": "AI prompt engineering and management commands",
    "permissions": ["prompts:read", "prompts:write"],
    "dependencies": ["core", "ai-ops"]
  },

  "commands": {
    "prompt": {
      "description": "Prompt engineering toolkit",
      "subcommands": {
        "generate": {
          "description": "Generate optimized prompts for AI agents",
          "arguments": [
            {
              "name": "task",
              "required": true,
              "type": "string",
              "description": "Task description for prompt generation"
            }
          ],
          "options": [
            {
              "name": "model",
              "short": "m",
              "type": "enum",
              "values": ["claude", "gpt4", "gemini", "llama"],
              "default": "claude",
              "description": "Target AI model"
            },
            {
              "name": "style",
              "short": "s",
              "type": "enum",
              "values": ["technical", "creative", "analytical", "instructional"],
              "default": "technical"
            },
            {
              "name": "max-tokens",
              "type": "integer",
              "default": 2000,
              "description": "Maximum tokens for generated prompt"
            }
          ],
          "handler": {
            "type": "websocket",
            "endpoint": "/api/v1/prompts/generate",
            "method": "POST"
          }
        },

        "test": {
          "description": "Test prompts against multiple models",
          "arguments": [
            {
              "name": "prompt_file",
              "required": true,
              "type": "file",
              "description": "Path to prompt file or '-' for stdin"
            }
          ],
          "options": [
            {
              "name": "models",
              "short": "m",
              "type": "array",
              "default": ["claude", "gpt4"],
              "description": "Models to test against"
            },
            {
              "name": "iterations",
              "short": "i",
              "type": "integer",
              "default": 3,
              "description": "Number of test iterations"
            },
            {
              "name": "compare",
              "short": "c",
              "type": "boolean",
              "default": true,
              "description": "Show comparison table"
            }
          ],
          "handler": {
            "type": "wasm",
            "function": "test_prompts_parallel",
            "fallback": {
              "type": "websocket",
              "endpoint": "/api/v1/prompts/test"
            }
          }
        },

        "library": {
          "description": "Manage prompt libraries",
          "subcommands": {
            "list": {
              "description": "List available prompt libraries",
              "options": [
                {
                  "name": "filter",
                  "type": "string",
                  "description": "Filter by category or tag"
                }
              ],
              "handler": {
                "type": "websocket",
                "endpoint": "/api/v1/prompts/libraries"
              }
            },

            "import": {
              "description": "Import a prompt library",
              "arguments": [
                {
                  "name": "source",
                  "required": true,
                  "type": "string",
                  "description": "GitHub URL or local path"
                }
              ],
              "handler": {
                "type": "websocket",
                "endpoint": "/api/v1/prompts/libraries/import"
              }
            }
          }
        }
      }
    }
  }
}
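A consumer of this format has to resolve a command path to its subcommand spec and merge user-supplied options with the declared defaults. A minimal sketch in Python (names and the trimmed-down manifest are illustrative):

```python
import json

# Trimmed-down library manifest following the format above
LIBRARY_JSON = """
{
  "library": {"name": "prompt-engineering", "version": "1.0.0"},
  "commands": {
    "prompt": {
      "subcommands": {
        "generate": {
          "options": [
            {"name": "model", "type": "enum", "default": "claude"},
            {"name": "max-tokens", "type": "integer", "default": 2000}
          ]
        }
      }
    }
  }
}
"""

def resolve(library, command, subcommand, user_opts):
    """Look up a subcommand spec and fill unset options with defaults."""
    spec = library["commands"][command]["subcommands"][subcommand]
    opts = {o["name"]: o.get("default") for o in spec.get("options", [])}
    opts.update(user_opts)  # user-supplied values override defaults
    return opts

lib = json.loads(LIBRARY_JSON)
opts = resolve(lib, "prompt", "generate", {"model": "gpt4"})
# opts == {"model": "gpt4", "max-tokens": 2000}
```

The same merge step applies regardless of handler type; the resolved option map is what gets sent to the websocket endpoint or passed to the WASM function.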

8. Implementation Blueprint

8.1 Core CODI Binary (Rust)

// src/main.rs
use anyhow::Result; // assuming anyhow for the Result alias
use clap::Command;

#[tokio::main]
async fn main() -> Result<()> {
    let app = Command::new("codi")
        .version("1.0.0")
        .about("CODITECT Dynamic CLI")
        .subcommand(Command::new("log").about("Log messages to CODITECT"))
        .subcommand(Command::new("auth").about("Authentication commands"))
        // ... 6-8 more core commands
        .subcommand(Command::new("help").about("Show available commands"));

    let matches = app.get_matches();

    match matches.subcommand() {
        Some(("log", args)) => handle_log(args).await,
        Some(("auth", args)) => handle_auth(args).await,
        Some(("help", _)) => show_dynamic_help().await,
        // Unknown commands fall through to the dynamic loader
        Some((cmd, args)) => handle_dynamic_command(cmd, args).await,
        None => show_dynamic_help().await,
    }
}

// src/dynamic_loader.rs
pub struct DynamicLoader {
    cache: SqliteCache,
    websocket: WebSocketClient,
    runtime_mode: RuntimeMode,
}

impl DynamicLoader {
    pub async fn load_command(&self, command: &str) -> Result<CommandLibrary> {
        // Extract library name from command (e.g. "agent.list" -> "agent")
        let library_name = command.split('.').next().unwrap();

        // Check cache first
        if let Some(lib) = self.cache.get_library(library_name).await? {
            if !lib.is_expired() {
                return Ok(lib);
            }
        }

        // Fetch from server
        let library_json = self.websocket.request_library(library_name).await?;

        // Store based on runtime mode
        match self.runtime_mode {
            RuntimeMode::CloudRun => self.cache.store_with_gcs_backup(&library_json).await?,
            RuntimeMode::WASM => self.store_in_indexeddb(&library_json).await?,
            RuntimeMode::Local => self.cache.store(&library_json).await?,
        }

        Ok(CommandLibrary::from_json(library_json)?)
    }
}

// src/storage/mod.rs
#[derive(Clone, Copy, PartialEq, Eq)]
pub enum RuntimeMode {
    CloudRun,
    WASM,
    Local,
}

impl RuntimeMode {
    pub fn detect() -> Self {
        if std::env::var("K_SERVICE").is_ok() {
            RuntimeMode::CloudRun
        } else if cfg!(target_arch = "wasm32") {
            RuntimeMode::WASM
        } else {
            RuntimeMode::Local
        }
    }
}

8.2 Storage Implementation

Multi-Environment Storage Strategy

// src/storage/sqlite_cache.rs
pub struct SqliteCache {
    conn: SqliteConnection,
    gcs_mount: Option<PathBuf>,
}

impl SqliteCache {
    pub async fn init(mode: RuntimeMode) -> Result<Self> {
        let db_path = match mode {
            // Cloud Run: persist through the GCS FUSE mount
            RuntimeMode::CloudRun => PathBuf::from("/var/lib/codi/cache.db"),
            // Local: per-user data directory
            RuntimeMode::Local => dirs::data_dir()
                .unwrap()
                .join("coditect")
                .join("codi")
                .join("cache.db"),
            // WASM uses IndexedDB instead; this path is never opened
            RuntimeMode::WASM => PathBuf::from(":memory:"),
        };

        let conn = SqliteConnection::connect(db_path.to_str().unwrap()).await?;

        // Initialize schema
        sqlx::query(
            r#"
            CREATE TABLE IF NOT EXISTS command_libraries (
                id TEXT PRIMARY KEY,
                version TEXT NOT NULL,
                library_json BLOB NOT NULL,
                metadata TEXT NOT NULL,
                loaded_at INTEGER NOT NULL,
                expires_at INTEGER,
                size_bytes INTEGER NOT NULL,
                hash TEXT NOT NULL,
                UNIQUE(id, version)
            )
            "#,
        )
        .execute(&conn)
        .await?;

        Ok(Self {
            conn,
            gcs_mount: if mode == RuntimeMode::CloudRun {
                Some(PathBuf::from("/var/lib/codi"))
            } else {
                None
            },
        })
    }

    pub async fn get_library(&self, name: &str) -> Result<Option<LibraryData>> {
        let row = sqlx::query_as::<_, LibraryRow>(
            "SELECT * FROM command_libraries WHERE id = ? ORDER BY version DESC LIMIT 1",
        )
        .bind(name)
        .fetch_optional(&self.conn)
        .await?;

        match row {
            Some(r) => {
                let lib = LibraryData {
                    id: r.id,
                    version: r.version,
                    json: decompress(&r.library_json)?,
                    loaded_at: r.loaded_at,
                    expires_at: r.expires_at,
                };
                Ok(Some(lib))
            }
            None => Ok(None),
        }
    }
}
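One caveat with the lookup above: `ORDER BY version DESC` compares version strings lexically, so `1.10.0` sorts below `1.2.0`. A small Python sketch against a throwaway SQLite database (stdlib `sqlite3`, schema mirroring the one created in `init`) shows a numeric sort that avoids this:

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE command_libraries (
        id TEXT, version TEXT, library_json BLOB, metadata TEXT,
        loaded_at INTEGER, expires_at INTEGER,
        size_bytes INTEGER, hash TEXT,
        UNIQUE(id, version)
    )
""")
now = int(time.time())
conn.execute("INSERT INTO command_libraries VALUES (?,?,?,?,?,?,?,?)",
             ("agent-ops", "1.2.0", b"{}", "{}", now, None, 2, "h1"))
conn.execute("INSERT INTO command_libraries VALUES (?,?,?,?,?,?,?,?)",
             ("agent-ops", "1.10.0", b"{}", "{}", now, None, 2, "h2"))

def newest(conn, name):
    """Latest cached version, sorted numerically component by component
    (ORDER BY version DESC would mis-sort '1.10.0' below '1.2.0')."""
    rows = conn.execute(
        "SELECT version FROM command_libraries WHERE id = ?", (name,)
    ).fetchall()
    return max((r[0] for r in rows),
               key=lambda v: tuple(int(p) for p in v.split(".")))

latest = newest(conn, "agent-ops")  # '1.10.0'
```

In the Rust implementation the equivalent fix would be to sort in application code or store a sortable version encoding.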

8.3 Browser WASM Implementation

// codi-wasm/src/storage.ts
import initSqlJs from 'sql.js';
import { openDB, DBSchema, IDBPDatabase } from 'idb';

interface CODIDatabase extends DBSchema {
  libraries: {
    key: string;
    value: {
      id: string;
      version: string;
      json: string;
      loadedAt: number;
      expiresAt: number;
    };
  };
  'sql-state': {
    key: string;
    value: { id: string; data: Uint8Array };
  };
}

export class CODIWASMStorage {
  private db: IDBPDatabase<CODIDatabase>;
  private sql: any;

  async init() {
    // Initialize IndexedDB (both object stores must be created on upgrade)
    this.db = await openDB<CODIDatabase>('codi-cache', 1, {
      upgrade(db) {
        db.createObjectStore('libraries', { keyPath: 'id' });
        db.createObjectStore('sql-state', { keyPath: 'id' });
      },
    });

    // Initialize SQL.js for complex queries
    const SQL = await initSqlJs({
      locateFile: (file) => `/wasm/${file}`,
    });

    // Check if we have saved state
    const savedDb = await this.loadFromIndexedDB();
    this.sql = savedDb || new SQL.Database();

    // Create schema if needed
    this.sql.run(`
      CREATE TABLE IF NOT EXISTS command_index (
        command TEXT PRIMARY KEY,
        library_id TEXT NOT NULL,
        library_version TEXT NOT NULL,
        handler_type TEXT NOT NULL,
        endpoint TEXT,
        permissions TEXT
      )
    `);
  }

  async storeLibrary(libraryJson: string) {
    const library = JSON.parse(libraryJson);

    // Store in IndexedDB
    await this.db.put('libraries', {
      id: library.library.name,
      version: library.library.version,
      json: libraryJson,
      loadedAt: Date.now(),
      expiresAt: Date.now() + 7 * 24 * 60 * 60 * 1000, // 7 days
    });

    // Update SQL index (top-level subcommands only)
    for (const [cmdName, cmdDef] of Object.entries<any>(library.commands)) {
      for (const [subCmd, subDef] of Object.entries<any>(cmdDef.subcommands)) {
        this.sql.run(
          `INSERT OR REPLACE INTO command_index VALUES (?, ?, ?, ?, ?, ?)`,
          [
            `${cmdName}.${subCmd}`,
            library.library.name,
            library.library.version,
            subDef.handler.type,
            subDef.handler.endpoint || null,
            JSON.stringify(library.library.permissions),
          ]
        );
      }
    }

    // Save SQL database to IndexedDB
    await this.saveToIndexedDB();
  }

  async getLibrary(name: string): Promise<any | null> {
    const lib = await this.db.get('libraries', name);
    if (lib && lib.expiresAt > Date.now()) {
      return JSON.parse(lib.json);
    }
    return null;
  }

  private async saveToIndexedDB() {
    const data = this.sql.export();
    await this.db.put('sql-state', { id: 'main', data: new Uint8Array(data) });
  }

  private async loadFromIndexedDB(): Promise<any | null> {
    const saved = await this.db.get('sql-state', 'main');
    if (saved) {
      const SQL = await initSqlJs({ locateFile: (file) => `/wasm/${file}` });
      return new SQL.Database(saved.data);
    }
    return null;
  }
}

9. Testing Strategy

9.1 Unit Tests

#[cfg(test)]
mod tests {
    use super::*;
    use std::time::Instant;

    #[tokio::test]
    async fn test_library_loading() {
        let loader = DynamicLoader::new_test();

        // First load should fetch from server
        let start = Instant::now();
        let lib = loader.load_command("agent.list").await.unwrap();
        let first_load = start.elapsed();

        assert!(first_load.as_millis() < 300);
        assert_eq!(lib.name, "agent-ops");

        // Second load should use cache
        let start = Instant::now();
        let lib2 = loader.load_command("agent.list").await.unwrap();
        let cached_load = start.elapsed();

        assert!(cached_load.as_millis() < 50);
        assert_eq!(lib.version, lib2.version);
    }

    #[tokio::test]
    async fn test_multi_environment_storage() {
        // Test Cloud Run mode
        std::env::set_var("K_SERVICE", "test");
        let cache = SqliteCache::init(RuntimeMode::detect()).await.unwrap();
        assert!(cache.gcs_mount.is_some());

        // Test local mode
        std::env::remove_var("K_SERVICE");
        let cache = SqliteCache::init(RuntimeMode::detect()).await.unwrap();
        assert!(cache.gcs_mount.is_none());
    }
}

9.2 Integration Tests

#[tokio::test]
async fn test_full_command_execution() {
    let codi = CODIClient::new().await.unwrap();

    // Execute dynamic command
    let output = codi.execute(&["agent", "list", "--format", "json"]).await.unwrap();

    let agents: Vec<Agent> = serde_json::from_str(&output).unwrap();
    assert!(!agents.is_empty());
}

#[tokio::test]
async fn test_offline_capability() {
    let codi = CODIClient::new().await.unwrap();

    // Load library while online
    codi.execute(&["agent", "list"]).await.unwrap();

    // Disconnect from network
    codi.set_offline_mode(true);

    // Should still work from cache
    let output = codi.execute(&["agent", "list"]).await.unwrap();
    assert!(!output.is_empty());
}

10. Security Considerations

Security Architecture

Security Flow Sequence

11. Performance Characteristics

11.1 Performance Metrics Visualization

11.2 Load Time Breakdown

11.3 Optimization Strategies

12. Operational Considerations

12.1 Monitoring

metrics:
  - name: codi_library_load_duration_ms
    type: histogram
    labels: [library_name, cache_hit]

  - name: codi_active_libraries
    type: gauge

  - name: codi_cache_size_bytes
    type: gauge
    labels: [storage_type]

  - name: codi_command_executions_total
    type: counter
    labels: [command, status]

12.2 Deployment

  • Zero-downtime library updates
  • Gradual rollout with version pinning
  • Automatic cache invalidation
  • Health checks for library server
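"Automatic cache invalidation" and "version pinning" interact: a pinned deployment must keep serving its pinned version even when the server manifest advances. A hedged sketch of that decision in Python (the record shapes mirror the `version` and `hash` columns of the cache schema; function and field names are illustrative):

```python
def needs_refresh(cached, manifest, pinned_version=None):
    """Decide whether a cached library entry must be re-fetched.

    cached/manifest are {'version': ..., 'hash': ...} records.
    A pinned version takes precedence over the manifest (gradual
    rollout); otherwise a content-hash mismatch invalidates the entry.
    """
    if pinned_version is not None:
        # Refresh only if the cache holds something other than the pin
        return cached["version"] != pinned_version
    # Unpinned: follow the server manifest via content hash
    return cached["hash"] != manifest["hash"]

cached = {"version": "1.0.0", "hash": "abc"}
manifest = {"version": "1.1.0", "hash": "def"}
needs_refresh(cached, manifest)                          # True: hash changed
needs_refresh(cached, manifest, pinned_version="1.0.0")  # False: pin holds
```

Because invalidation is driven by the server-advertised hash rather than a TTL alone, library updates can roll out with zero downtime: clients simply re-fetch on their next command invocation.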

13. Migration Strategy

13.1 Migration Timeline

13.2 Migration Flow

14. Consequences

14.1 Impact Analysis

14.2 Cost-Benefit Visualization

14.3 Risk Matrix

15. References & Standards

16. Appendix

A. Example Prompt Engineering Library Commands

# Generate optimized prompt for code review
codi prompt generate "Review Rust code for security vulnerabilities" \
  --model claude \
  --style analytical \
  --max-tokens 1500

# Test prompt across multiple models
codi prompt test security-review.prompt \
  --models claude,gpt4,gemini \
  --iterations 5 \
  --compare

# Import community prompt library
codi prompt library import https://github.com/coditect/security-prompts

# List available prompt templates
codi prompt library list --filter security

B. Command Development Guide

Creating a new command library:

# prompt-engineering-library.yaml
library:
  name: prompt-engineering
  version: 1.0.0
  description: AI prompt engineering toolkit

commands:
  prompt:
    generate:
      implementation: |
        async function generatePrompt({ task, model, style, maxTokens }) {
          const template = await selectTemplate(model, style);
          const optimized = await optimizeForTask(template, task);
          return formatPrompt(optimized, maxTokens);
        }
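Before a library definition is published, it can be checked for the required metadata fields from the format in section 7.2. A minimal validation sketch (Python; the required-field set and function name are illustrative assumptions, not a published CODI API):

```python
# Fields assumed mandatory per the library format in section 7.2
REQUIRED_LIBRARY_FIELDS = {"name", "version", "description"}

def validate_library(manifest):
    """Return a list of problems; an empty list means publishable."""
    problems = []
    lib = manifest.get("library", {})
    missing = REQUIRED_LIBRARY_FIELDS - lib.keys()
    problems += [f"library.{f} missing" for f in sorted(missing)]
    if not manifest.get("commands"):
        problems.append("no commands defined")
    return problems

manifest = {
    "library": {"name": "prompt-engineering", "version": "1.0.0"},
    "commands": {"prompt": {}},
}
issues = validate_library(manifest)  # ["library.description missing"]
```

Running a check like this in CI before uploading to the library server keeps malformed bundles from ever reaching the dynamic loader.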

17. Review & Approval

| Reviewer | Role | Status | Date | Comments |
|---|---|---|---|---|
| [Pending] | Platform Architect | - | - | - |
| [Pending] | Security Lead | - | - | - |
| [Pending] | DevOps Lead | - | - | - |
| [Pending] | AI Team Lead | - | - | - |

Document Status: DRAFT - Awaiting Review
Next Review Date: 2025-09-05
Owner: Platform Architecture Team