CODITECT Technical Design Document (TDD)
Version: 3.0
Status: Current Implementation
Introduction
CODITECT is an AI-powered software development platform that automates code generation through multi-agent orchestration. At its core, CODITECT takes software specifications and transforms them into working code by coordinating specialized AI agents, each handling different aspects of development.
What CODITECT Does
Think of CODITECT as a virtual development team where:
- You provide: Requirements, specifications, and design decisions
- CODITECT delivers: Working code, tests, documentation, and deployment
The platform operates like a smart factory for software:
- Input: Project specifications and requirements
- Processing: AI agents collaborate to implement features
- Output: Production-ready code with full test coverage
System Overview Diagram
Core Modules Breakdown
1. API Gateway Module
The front door to CODITECT. Built with Rust and Actix-web, it handles:
- User authentication (register/login)
- Request routing and validation
- WebSocket connections for real-time features
- Rate limiting and security
2. CODI2 Monitoring System
The nervous system that tracks everything happening in the platform:
- Audit Logger: Records every action to FoundationDB
- File Monitor: Watches file system changes using FSEvents/inotify
- Message Bus: In-memory communication using Tokio channels
- State Store: Distributed state management in FoundationDB
3. Workspace Manager
Creates isolated development environments:
- Spins up Kubernetes StatefulSets on demand
- Manages persistent volumes for code storage
- Handles workspace suspension/resumption
- Provides terminal access and file operations
4. Orchestrator
The brain that coordinates AI agents:
- Assigns tasks to specialized agents
- Manages agent lifecycles and health
- Coordinates multi-agent workflows
- Ensures no conflicts between parallel work
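As a concrete illustration of the assignment step, a least-loaded scheduler can be sketched as follows. The `Agent` and `Task` types are assumptions made for the sketch; this document does not show the Orchestrator's actual data model.

```rust
use std::collections::HashMap;

// Hypothetical types for illustration only.
#[derive(Debug)]
struct Agent {
    id: String,
    specialty: String, // e.g. "codegen", "tests", "docs"
    active_tasks: usize,
}

#[derive(Debug)]
struct Task {
    id: String,
    kind: String, // must match an agent specialty
}

/// Pick the least-loaded agent whose specialty matches the task, so
/// parallel work spreads out and no single agent is double-booked.
fn assign(task: &Task, agents: &mut HashMap<String, Agent>) -> Option<String> {
    let chosen = agents
        .values_mut()
        .filter(|a| a.specialty == task.kind)
        .min_by_key(|a| a.active_tasks)?;
    chosen.active_tasks += 1;
    Some(chosen.id.clone())
}

fn main() {
    let mut agents = HashMap::new();
    agents.insert("a1".to_string(), Agent { id: "a1".into(), specialty: "codegen".into(), active_tasks: 2 });
    agents.insert("a2".to_string(), Agent { id: "a2".into(), specialty: "codegen".into(), active_tasks: 0 });

    let task = Task { id: "t1".into(), kind: "codegen".into() };
    println!("{:?}", assign(&task, &mut agents)); // the idle agent wins
}
```

Least-loaded matching is one simple way to honor the "no conflicts between parallel work" goal; a real orchestrator would presumably also track task dependencies and agent health.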
5. Data Layer (FoundationDB)
Single source of truth for all data:
- User accounts and authentication
- Project and task information
- Audit logs and metrics
- Session state and agent coordination
- Cache (no Redis needed)
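With everything in one key space, key prefixes act as the schema. Only the `cache/` prefix is confirmed by the implementation later in this document; the other prefixes below are illustrative assumptions following the same slash-separated convention.

```rust
/// Illustrative key layout for the single FoundationDB key space.
/// Only the "cache/" prefix appears in this document's code; the
/// rest are plausible examples of the same convention.
fn user_key(user_id: &str) -> String {
    format!("users/{}", user_id)
}

fn project_key(user_id: &str, project_id: &str) -> String {
    // Nesting the user id first lets one range scan list a user's projects.
    format!("projects/{}/{}", user_id, project_id)
}

fn cache_key(key: &str) -> String {
    format!("cache/{}", key)
}

fn main() {
    println!("{}", user_key("u42"));
    println!("{}", project_key("u42", "p7"));
    println!("{}", cache_key("session:abc"));
}
```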
How Components Work Together
Technical Architecture Details
Technology Stack
Language: Rust 1.70+
Framework: Actix-web 4.0
Database: FoundationDB 7.1 (no Redis/PostgreSQL)
Container: Docker 24.0
Orchestration: GKE Autopilot (Kubernetes 1.27)
Message Queue: Tokio MPSC channels (in-memory)
File Monitor: FSEvents (macOS) / inotify (Linux)
Cloud: Google Cloud Platform
Container Orchestration Model
API Service Implementation
Actix-web Server Configuration
// src/api-v2/src/main.rs
use actix_cors::Cors;
use actix_web::{middleware::Logger, web, App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Initialize the FoundationDB connection shared across all workers.
    let database = db::init_db()
        .await
        .expect("failed to connect to FoundationDB");
    let app_state = AppState::new(database);

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(app_state.clone()))
            .wrap(Logger::default())
            // Permissive CORS for development; restrict origins in production.
            .wrap(Cors::default().allow_any_origin())
            .service(
                web::scope("/api/v2")
                    .service(handlers::health_check)
                    .service(handlers::register)
                    .service(handlers::login),
            )
            .route("/ws", web::get().to(websocket_handler))
    })
    .bind(("0.0.0.0", 8080))?
    .run()
    .await
}
FoundationDB Integration
// No Redis - all caching lives in FoundationDB under a "cache/" prefix.
pub struct AppState {
    pub db: Arc<Database>,
    pub jwt_secret: String,
}

impl AppState {
    pub async fn cache_get(&self, key: &str) -> Result<Option<Vec<u8>>> {
        let cache_key = format!("cache/{}", key);
        self.db
            .transact(|txn| {
                // Clone per attempt: FoundationDB may retry the closure.
                let cache_key = cache_key.clone();
                async move { Ok(txn.get(&cache_key, false).await?) }
            })
            .await
    }

    pub async fn cache_set(&self, key: &str, value: &[u8], ttl: Duration) -> Result<()> {
        let cache_key = format!("cache/{}", key);
        // Parallel expiry key: FoundationDB has no native TTL, so a
        // background task must sweep entries whose expiry has passed.
        let expire_key = format!("cache_expiry/{}", key);
        let expiry = SystemTime::now() + ttl;
        let expiry_secs = expiry.duration_since(UNIX_EPOCH)?.as_secs();
        let value = value.to_vec();
        self.db
            .transact(|txn| {
                let (cache_key, expire_key, value) =
                    (cache_key.clone(), expire_key.clone(), value.clone());
                async move {
                    txn.set(&cache_key, &value);
                    txn.set(&expire_key, &expiry_secs.to_be_bytes());
                    Ok(())
                }
            })
            .await
    }
}
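Because `cache_set` only records an expiry and FoundationDB has no native TTL, something must eventually delete stale entries. The decision logic of such a sweeper can be sketched independently of the FDB client; the encoding matches the `to_be_bytes` call in `cache_set`, while the function name here is an assumption.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Given (key, expiry) pairs read from the "cache_expiry/" range,
/// return the keys whose entries should be deleted at `now_secs`.
/// Expiry values are u64 seconds, big-endian encoded as in cache_set.
fn expired_keys(entries: &[(String, [u8; 8])], now_secs: u64) -> Vec<String> {
    entries
        .iter()
        .filter(|(_, raw)| u64::from_be_bytes(*raw) <= now_secs)
        .map(|(key, _)| key.clone())
        .collect()
}

fn main() {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs();
    let entries = vec![
        ("session:a".to_string(), (now - 10).to_be_bytes()), // already stale
        ("session:b".to_string(), (now + 60).to_be_bytes()), // still fresh
    ];
    // A real sweeper would issue txn.clear() for each returned key.
    println!("{:?}", expired_keys(&entries, now));
}
```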
GKE Workspace Architecture
StatefulSet Lifecycle
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ws-${USER_ID}-${WORKSPACE_ID}
  namespace: workspaces
spec:
  serviceName: workspace-service
  replicas: 1
  selector:
    matchLabels:
      app: workspace
      user: ${USER_ID}
  template:
    metadata:
      labels:  # must match spec.selector.matchLabels
        app: workspace
        user: ${USER_ID}
    spec:
      nodeSelector:
        cloud.google.com/gke-spot: "true"  # 84% cost savings
      containers:
        - name: workspace
          image: gcr.io/serene-voltage-464305-n2/workspace:latest
          resources:
            requests:
              memory: "4Gi"
              cpu: "2"
            limits:
              memory: "16Gi"
              cpu: "8"
          volumeMounts:
            - name: workspace-data
              mountPath: /workspace
            - name: fdb-config
              mountPath: /etc/foundationdb
          env:
            - name: FDB_CLUSTER_FILE
              value: /etc/foundationdb/fdb.cluster
      volumes:
        - name: fdb-config
          configMap:
            name: fdb-cluster-config
  volumeClaimTemplates:
    - metadata:
        name: workspace-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "premium-rwo"
        resources:
          requests:
            storage: 10Gi
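One practical wrinkle with `ws-${USER_ID}-${WORKSPACE_ID}`: StatefulSet names must be valid DNS labels (lowercase alphanumerics and hyphens, at most 63 characters). A sanitizer along the following lines keeps arbitrary IDs within those rules; the helper is an assumption for illustration, not code from the platform.

```rust
/// Build a DNS-1123-label-safe StatefulSet name from raw IDs.
/// Illustrative helper; the platform's actual naming code is not shown here.
fn workspace_name(user_id: &str, workspace_id: &str) -> String {
    let sanitize = |s: &str| -> String {
        s.chars()
            .map(|c| {
                let c = c.to_ascii_lowercase();
                if c.is_ascii_alphanumeric() { c } else { '-' }
            })
            .collect()
    };
    let mut name = format!("ws-{}-{}", sanitize(user_id), sanitize(workspace_id));
    name.truncate(63); // DNS label length limit
    // A label may not end with a hyphen.
    while name.ends_with('-') {
        name.pop();
    }
    name
}

fn main() {
    println!("{}", workspace_name("User_42", "Feature/Auth"));
}
```

Truncation alone can collide for very long IDs; a production version would typically append a short hash of the original IDs before truncating.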
Workspace Manager Implementation
use k8s_openapi::api::apps::v1::StatefulSet;
use kube::api::{Patch, PatchParams};
use kube::{Api, Client};
use serde_json::json;

pub struct WorkspaceManager {
    k8s: Client,
    fdb: Arc<Database>,
}

impl WorkspaceManager {
    pub async fn ensure_workspace(&self, user_id: &str, workspace_id: &str) -> Result<WorkspacePod> {
        let name = format!("ws-{}-{}", user_id, workspace_id);
        let api: Api<StatefulSet> = Api::namespaced(self.k8s.clone(), "workspaces");

        // Check whether the StatefulSet already exists.
        match api.get(&name).await {
            Ok(sts) => {
                // Scale up if the workspace is suspended (replicas = 0).
                if sts.spec.and_then(|s| s.replicas).unwrap_or(0) == 0 {
                    let patch = json!({"spec": {"replicas": 1}});
                    api.patch(&name, &PatchParams::default(), &Patch::Merge(patch))
                        .await?;
                    self.wait_for_pod_ready(&name).await?;
                }
                Ok(self.get_workspace_info(&name).await?)
            }
            // Any lookup failure is treated as "not found": create a new workspace.
            Err(_) => {
                self.create_statefulset(user_id, workspace_id).await?;
                self.wait_for_pod_ready(&name).await?;
                self.initialize_workspace(&name).await?;
                Ok(self.get_workspace_info(&name).await?)
            }
        }
    }
}
CODI2 Technical Architecture
CODI2 (CODITECT Intelligence v2) is the primary interface layer between users (human developers and AI agents) and the CODITECT platform. It provides a unified, race-free system for all platform interactions.
System Design Philosophy
CODI2 was designed to solve fundamental problems in distributed development environments:
- Race Conditions: Multiple agents writing to the same log file
- Data Loss: File system buffers not flushing during crashes
- Coordination Failures: Agents stepping on each other's work
- Audit Compliance: No reliable record of who did what when
CODI2 Architecture
Core Components Without External Dependencies
// Everything in FoundationDB - no Redis needed.
use std::sync::{Arc, Mutex};
use std::time::Duration;
use tokio::task::JoinHandle;

pub struct Codi2System {
    audit_logger: AuditLogger,
    file_monitor: FileMonitor,
    message_bus: MessageBus,
    state_store: StateStore,
}

pub struct AuditLogger {
    fdb: Arc<Database>,
    buffer: Arc<Mutex<Vec<AuditEvent>>>,
    flush_task: JoinHandle<()>,
}

impl AuditLogger {
    pub fn new(fdb: Arc<Database>) -> Self {
        let buffer = Arc::new(Mutex::new(Vec::new()));
        let buffer_clone = buffer.clone();
        let fdb_clone = fdb.clone();

        // Background task that flushes buffered events every 100 ms.
        let flush_task = tokio::spawn(async move {
            let mut interval = tokio::time::interval(Duration::from_millis(100));
            loop {
                interval.tick().await;
                // Drain the buffer inside the lock, then drop the guard
                // before awaiting: holding a std MutexGuard across .await
                // does not compile inside tokio::spawn (the guard is !Send).
                let events = {
                    let mut buf = buffer_clone.lock().expect("audit buffer poisoned");
                    std::mem::take(&mut *buf)
                };
                if !events.is_empty() {
                    let _ = Self::flush_to_fdb(&fdb_clone, events).await;
                }
            }
        });

        Self { fdb, buffer, flush_task }
    }
}
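The `flush_to_fdb` helper is not shown in this document. Whatever it does internally, each flushed event needs a key that is unique and chronologically ordered so concurrent writers never collide, which is exactly the race-condition problem CODI2 exists to solve. One common scheme, sketched below as an assumption rather than the platform's confirmed encoding (FoundationDB versionstamps are another option), is a big-endian timestamp plus a sequence number:

```rust
/// Build an ordered, collision-resistant audit key:
/// "audit/" ++ big-endian micros ++ big-endian per-batch sequence.
/// Big-endian encoding makes byte order equal chronological order,
/// so a range scan replays events in the order they happened.
fn audit_event_key(timestamp_micros: u64, seq: u32) -> Vec<u8> {
    let mut key = b"audit/".to_vec();
    key.extend_from_slice(&timestamp_micros.to_be_bytes());
    key.extend_from_slice(&seq.to_be_bytes());
    key
}

fn main() {
    let earlier = audit_event_key(1_000, 0);
    let later = audit_event_key(1_001, 0);
    // Byte-wise key comparison matches time order.
    println!("{}", earlier < later);
}
```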
Performance Specifications
FoundationDB Performance
Cluster Configuration:
  Nodes: 6 (2 per zone)
  Storage: 600GB SSD total
  Replication: Triple

Benchmarks:
  Single Key Read: 5ms p50, 10ms p99
  Range Scan (1000 keys): 20ms p50, 50ms p99
  Write Transaction: 10ms p50, 30ms p99
  Batch Write (100 keys): 50ms p50, 100ms p99
Security Implementation
Pod Security
PodSecurityPolicy was removed in Kubernetes 1.25, so on the platform's 1.27 clusters the same restrictions are enforced through the restricted Pod Security Standard on the workspaces namespace plus a per-pod securityContext. The restricted profile also limits volume types to configMap, emptyDir, persistentVolumeClaim, secret, and a few other ephemeral kinds:

# In the workspace StatefulSet's pod template; the workspaces namespace
# is additionally labelled pod-security.kubernetes.io/enforce: restricted.
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: workspace
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
Conclusion
CODITECT brings together Rust's performance, FoundationDB's reliability, and Kubernetes' scalability to create a platform where AI agents can collaborate on software development. Each module serves a specific purpose but works seamlessly with others through well-defined interfaces. The result is a system that can take specifications and produce working software with minimal human intervention.