FoundationDB Multi-Process Architecture Analysis

Date: 2025-10-06
Context: LM Studio multi-LLM IDE with multi-user/multi-tenant requirements
Decision: Evaluate monolithic vs. microservices architecture for FoundationDB integration


1. Current State Analysis

1.1 Existing Components

file-monitor (t2 project)

  • ✅ Production-ready file monitoring
  • ✅ Dual-log output (trace + events)
  • ✅ WSL2 compatible (polling mode)
  • ❌ No FoundationDB integration
  • ❌ No multi-tenant support
  • ❌ Single-user only

CODI2 (coditect-v4 submodule)

  • ✅ FoundationDB integration
  • ✅ Multi-tenant architecture
  • ✅ Session management
  • ✅ AI attribution (20+ tools)
  • ✅ Agent coordination
  • ⏳ Dashboard incomplete
  • ⏳ 70% feature parity

1.2 Requirements

Multi-User Multi-Tenant System:

  1. FoundationDB for centralized log storage
  2. Multi-tenant isolation (tenant_id + user_id)
  3. Real-time dashboard
  4. Session management
  5. Agent coordination
  6. File monitoring integration

2. Architecture Options

Option A: Monolithic Binary

Single coditect-server binary with all features:

coditect-server (single Rust binary)
├── File Monitor Module
├── Session Manager Module
├── Dashboard Web Server (Axum)
├── FoundationDB Client
├── Agent Coordinator
└── API Gateway

Pros:

  • ✅ Simpler deployment (one binary)
  • ✅ Easier inter-module communication (in-process)
  • ✅ Lower memory footprint
  • ✅ Simpler configuration
  • ✅ Better performance (no IPC overhead)

Cons:

  • ❌ Harder to scale individual components
  • ❌ Single point of failure
  • ❌ Tight coupling between modules
  • ❌ Harder to update one component independently
  • ❌ Resource contention (CPU/memory)

Option B: Microservices Architecture

Multiple specialized binaries:

System Architecture
├── coditect-monitor (file monitoring)
│   └── Port: N/A (writes to FDB)
│
├── coditect-session (session management)
│   └── Port: 8765 (gRPC/HTTP)
│
├── coditect-dashboard (web UI)
│   └── Port: 3000 (HTTP)
│
├── coditect-api (REST/WebSocket API)
│   └── Port: 8080 (HTTP/WS)
│
└── FoundationDB Cluster
    └── Port: 4500

Pros:

  • ✅ Independent scaling (scale dashboard separately from monitor)
  • ✅ Fault isolation (dashboard crash doesn't affect monitoring)
  • ✅ Independent deployment (update dashboard without restarting monitor)
  • ✅ Technology flexibility (dashboard could be Node.js if needed)
  • ✅ Clear separation of concerns

Cons:

  • ❌ More complex deployment
  • ❌ Inter-process communication overhead
  • ❌ Higher memory usage (multiple processes)
  • ❌ More configuration needed
  • ❌ Network latency between services

Option C: Hybrid Architecture (Recommended)

Core monolith + separate dashboard:

System Architecture
├── coditect-core (monolithic core)
│   ├── File Monitor
│   ├── Session Manager
│   ├── Agent Coordinator
│   ├── FoundationDB Client
│   └── Internal API (gRPC)
│       └── Port: 8765 (gRPC for dashboard)
│
├── coditect-dashboard (separate binary)
│   ├── React Frontend (static assets)
│   ├── Axum Web Server
│   ├── WebSocket Server
│   ├── Dashboard API
│   ├── Port: 3000 (HTTP/WS)
│   └── Connects to: coditect-core:8765
│
└── FoundationDB Cluster
    └── Port: 4500

Why This Works Best:

  1. Separation of Concerns:

    • Core monitoring/session logic stays together (tight coupling beneficial)
    • Dashboard UI isolated (can crash/restart without affecting core)
  2. Independent Lifecycle:

    • Dashboard can be updated frequently (UI changes)
    • Core binary updated less frequently (stability critical)
  3. Resource Management:

    • Core process: Low memory, always running
    • Dashboard: Higher memory (web assets), can restart
  4. Scalability:

    • Core: One instance per node (monitors local files)
    • Dashboard: Multiple instances behind load balancer

3. Implementation Design

3.1 coditect-core Binary

Location: src/coditect-core/

Responsibilities:

  • ✅ File system monitoring (integrate file-monitor code)
  • ✅ Session management (CODI2 session module)
  • ✅ FoundationDB client (audit logging)
  • ✅ Agent coordination (heartbeat, cohort)
  • ✅ Internal gRPC API for dashboard

Cargo.toml:

[package]
name = "coditect-core"
version = "1.0.0"

[[bin]]
name = "coditect-core"
path = "src/main.rs"

[dependencies]
tokio = { version = "1.0", features = ["full"] }
foundationdb = { version = "0.9", features = ["fdb-7_1"] }
notify = "6.1"
tonic = "0.12" # gRPC server
serde = { version = "1.0", features = ["derive"] }
# ... (same as file-monitor + CODI2 session deps)

Main Process:

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Initialize FoundationDB network (must happen before any FDB calls)
    fdb_network::init_network()?;
    let fdb_client = Database::default()?; // illustrative; use your FDB handle type
    let config = Config::load()?;          // illustrative config loader

    // Start file monitor
    let monitor = FileMonitor::new(config.watch_paths)?;
    tokio::spawn(monitor.run());

    // Start session manager; grab its handle before moving it into the task
    let session_mgr = SessionManager::new(fdb_client.clone())?;
    let session_handle = session_mgr.handle();
    tokio::spawn(session_mgr.run());

    // Start agent coordinator
    let coordinator = AgentCoordinator::new(fdb_client.clone())?;
    tokio::spawn(coordinator.run());

    // Serve the internal gRPC API for the dashboard (blocks until shutdown)
    let grpc_server = GrpcServer::new(fdb_client, session_handle);
    grpc_server.serve("0.0.0.0:8765").await?;

    Ok(())
}

3.2 coditect-dashboard Binary

Location: src/coditect-dashboard/

Responsibilities:

  • ✅ Web server (Axum)
  • ✅ React UI (static assets)
  • ✅ WebSocket server (real-time updates)
  • ✅ Dashboard API (queries coditect-core via gRPC)
  • ✅ Log viewer, session viewer, agent status

Cargo.toml:

[package]
name = "coditect-dashboard"
version = "1.0.0"

[[bin]]
name = "coditect-dashboard"
path = "src/main.rs"

[dependencies]
tokio = { version = "1.0", features = ["full"] }
axum = { version = "0.7", features = ["ws"] }
tower-http = { version = "0.5", features = ["fs", "cors"] }
tonic = "0.12" # gRPC client to coditect-core
serde = { version = "1.0", features = ["derive"] }

Main Process:

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Connect to the coditect-core gRPC API
    let core_client = CoreClient::connect("http://localhost:8765").await?;

    // Build the router: static assets, REST endpoints, and WebSocket upgrade
    let app = Router::new()
        .route("/", get(serve_index))
        .route("/api/logs", get(get_logs))
        .route("/api/sessions", get(get_sessions))
        .route("/ws", get(websocket_handler))
        .nest_service("/static", ServeDir::new("static"))
        .layer(CorsLayer::permissive())
        .with_state(AppState { core_client });

    // Serve the dashboard
    let listener = TcpListener::bind("0.0.0.0:3000").await?;
    axum::serve(listener, app).await?;

    Ok(())
}

4. FoundationDB Schema for Multi-Tenant

4.1 Keyspace Design

Keyspace layout:
/coditect/
├── tenants/{tenant_id}/
│   ├── users/{user_id}/
│   │   ├── profile
│   │   ├── sessions/{session_id}/
│   │   │   ├── metadata
│   │   │   ├── events/{timestamp}/
│   │   │   └── files/
│   │   └── workspaces/{workspace_id}/
│   │
│   ├── agents/{agent_id}/
│   │   ├── heartbeat
│   │   ├── state
│   │   └── cohort_id
│   │
│   └── audit_log/{timestamp}/
│
├── sessions/{session_id}/
│   └── (global session index)
│
└── agents/{agent_id}/
    └── (global agent index)

4.2 Rust Implementation

// Tenant isolation
pub struct TenantKey {
    tenant_id: String,
    user_id: String,
}

impl TenantKey {
    pub fn audit_log_key(&self, timestamp: DateTime<Utc>) -> Vec<u8> {
        format!(
            "/coditect/tenants/{}/users/{}/audit_log/{}",
            self.tenant_id,
            self.user_id,
            timestamp.to_rfc3339()
        )
        .into_bytes()
    }

    pub fn session_key(&self, session_id: &str) -> Vec<u8> {
        format!(
            "/coditect/tenants/{}/users/{}/sessions/{}",
            self.tenant_id, self.user_id, session_id
        )
        .into_bytes()
    }
}

// Usage
let tenant = TenantKey {
    tenant_id: "tenant-001".to_string(),
    user_id: "user-123".to_string(),
};

let key = tenant.audit_log_key(Utc::now());
tx.set(&key, &event_data); // Transaction::set is infallible; errors surface at commit
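Reading a tenant's data back requires a range scan over everything under its key prefix, and the end key of such a scan is the prefix's immediate lexicographic successor. The foundationdb crate's `RangeOption` can derive this for you; the following is a minimal stdlib sketch of the underlying rule (the helper name `prefix_range` is illustrative, not a crate API):

```rust
/// Compute the key range [begin, end) that covers every key starting with
/// `prefix`, usable as FoundationDB range-read bounds. `end` is the
/// lexicographic successor of the prefix: increment the last byte that is
/// not 0xFF and truncate everything after it.
fn prefix_range(prefix: &[u8]) -> (Vec<u8>, Vec<u8>) {
    let begin = prefix.to_vec();
    let mut end = prefix.to_vec();
    while let Some(&last) = end.last() {
        if last < 0xFF {
            *end.last_mut().unwrap() += 1;
            return (begin, end);
        }
        end.pop(); // last byte is 0xFF: carry into the previous byte
    }
    // Prefix was empty or all 0xFF: scan to the end of the keyspace
    (begin, vec![0xFF])
}

fn main() {
    let (begin, end) = prefix_range(b"/coditect/tenants/tenant-001/");

    // Every key under the tenant prefix falls inside [begin, end) ...
    let own_key: &[u8] = b"/coditect/tenants/tenant-001/users/user-123/profile";
    assert!(begin.as_slice() <= own_key && own_key < end.as_slice());
    // ... and another tenant's keys fall outside it.
    let other_key: &[u8] = b"/coditect/tenants/tenant-002/users/user-456/profile";
    assert!(other_key >= end.as_slice());
    println!("tenant range ok");
}
```

Passing `(begin, end)` as the bounds of a single range read is what makes "fetch all audit events for tenant X" one efficient scan instead of many point lookups.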

5. Deployment Architecture

5.1 Single Node Deployment (MVP)

Single Server
├── coditect-core (systemd service)
│   └── Listening on: localhost:8765 (gRPC)
│
├── coditect-dashboard (systemd service)
│   └── Listening on: 0.0.0.0:3000 (HTTP)
│
└── FoundationDB (systemd service)
    └── Listening on: localhost:4500

systemd Services:

/etc/systemd/system/coditect-core.service:

[Unit]
Description=CODITECT Core Service
After=network.target foundationdb.service
Requires=foundationdb.service

[Service]
Type=simple
User=coditect
WorkingDirectory=/opt/coditect
ExecStart=/opt/coditect/bin/coditect-core
Restart=always
RestartSec=10

Environment="FDB_CLUSTER_FILE=/etc/foundationdb/fdb.cluster"
Environment="RUST_LOG=info"

[Install]
WantedBy=multi-user.target

/etc/systemd/system/coditect-dashboard.service:

[Unit]
Description=CODITECT Dashboard
After=network.target coditect-core.service
Requires=coditect-core.service

[Service]
Type=simple
User=coditect
WorkingDirectory=/opt/coditect/dashboard
ExecStart=/opt/coditect/bin/coditect-dashboard
Restart=always
RestartSec=10

Environment="CORE_API_URL=http://localhost:8765"
Environment="RUST_LOG=info"

[Install]
WantedBy=multi-user.target

5.2 Production Deployment (Kubernetes)

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: coditect-core
spec:
  selector:
    matchLabels:
      app: coditect-core
  template:
    metadata:
      labels:
        app: coditect-core
    spec:
      containers:
        - name: core
          image: gcr.io/project/coditect-core:latest
          ports:
            - containerPort: 8765
              name: grpc
          env:
            - name: FDB_CLUSTER_FILE
              value: /etc/foundationdb/fdb.cluster
          volumeMounts:
            - name: fdb-config
              mountPath: /etc/foundationdb
            - name: workspace
              mountPath: /workspace
      volumes:
        - name: fdb-config
          configMap:
            name: fdb-cluster-config
        - name: workspace
          hostPath:
            path: /var/coditect/workspaces

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coditect-dashboard
spec:
  replicas: 3 # Scalable
  selector:
    matchLabels:
      app: coditect-dashboard
  template:
    metadata:
      labels:
        app: coditect-dashboard
    spec:
      containers:
        - name: dashboard
          image: gcr.io/project/coditect-dashboard:latest
          ports:
            - containerPort: 3000
              name: http
          env:
            - name: CORE_API_URL
              value: http://coditect-core:8765

---
apiVersion: v1
kind: Service
metadata:
  name: coditect-dashboard
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 3000
  selector:
    app: coditect-dashboard

6. Migration Path

Phase 1: Integrate FoundationDB into file-monitor

Goal: Add FDB storage without changing file-monitor architecture

// src/file-monitor/src/fdb_logger.rs
pub struct FdbLogger {
    db: Database,
    tenant_id: String,
}

impl FdbLogger {
    pub async fn log_event(&self, event: &AuditEvent) -> Result<()> {
        let tx = self.db.create_trx()?;

        let key = format!(
            "/coditect/tenants/{}/audit_log/{}",
            self.tenant_id, event.timestamp_utc
        );

        let value = serde_json::to_vec(event)?;
        tx.set(key.as_bytes(), &value); // set is infallible; errors surface at commit
        tx.commit().await?;

        Ok(())
    }
}

// Add to monitor.rs
let fdb_logger = FdbLogger::new(db, tenant_id)?;

// Log to both file and FDB
display_event(&event, &cli.format, &mut events_writer)?;
fdb_logger.log_event(&event).await?;
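The dual-logging pattern above can be kept decoupled by hiding both destinations behind a small sink trait, so the monitor loop emits each event once and the FDB path can be added or dropped without touching it. A hypothetical sketch (the names `EventSink`, `MultiSink`, and `VecSink` are illustrative, and the real `FdbLogger::log_event` is async; sync signatures are used here for brevity):

```rust
/// Hypothetical sink abstraction for Phase 1's dual logging: the monitor
/// loop emits each event once, and every registered sink (file, FDB, ...)
/// receives it.
trait EventSink {
    fn log(&mut self, event: &str) -> Result<(), String>;
}

/// Fan one event out to every sink; a failure in one sink (e.g. FDB
/// temporarily down) does not stop the others (e.g. the local file log).
struct MultiSink {
    sinks: Vec<Box<dyn EventSink>>,
}

impl MultiSink {
    fn log_all(&mut self, event: &str) -> Vec<Result<(), String>> {
        self.sinks.iter_mut().map(|s| s.log(event)).collect()
    }
}

/// In-memory sink standing in for the file writer or FdbLogger
struct VecSink(Vec<String>);

impl EventSink for VecSink {
    fn log(&mut self, event: &str) -> Result<(), String> {
        self.0.push(event.to_string());
        Ok(())
    }
}

fn main() {
    let mut multi = MultiSink {
        sinks: vec![Box::new(VecSink(Vec::new())), Box::new(VecSink(Vec::new()))],
    };
    let results = multi.log_all(r#"{"event":"file_modified"}"#);
    assert!(results.iter().all(|r| r.is_ok()));
    println!("fanned out to {} sinks", results.len());
}
```

Returning one `Result` per sink (rather than failing fast) is what preserves the "no breaking changes" guarantee: file logs keep working even if the FDB write fails.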

Phase 2: Extract Core Binary

Goal: Create coditect-core with file-monitor + session management

src/
├── file-monitor/          # Keep existing
└── coditect-core/         # New
    ├── main.rs            # Core server
    ├── file_monitor.rs    # Wrapper around file-monitor
    ├── session_manager.rs # From CODI2
    └── grpc_api.rs        # Internal API

Phase 3: Add Dashboard Binary

Goal: Create coditect-dashboard as separate process

src/
├── coditect-core/
└── coditect-dashboard/    # New
    ├── main.rs            # Dashboard server
    ├── api.rs             # REST API
    ├── websocket.rs       # Real-time updates
    └── static/            # React build output

7. Recommendation: START WITH HYBRID

Immediate Actions (Week 1-2)

  1. Integrate FoundationDB into file-monitor:

    cd src/file-monitor
    cargo add foundationdb --features fdb-7_1
  2. Add FdbLogger alongside file logging:

    • Keeps existing file-based logs working
    • Adds FDB storage in parallel
    • No breaking changes
  3. Test with multi-tenant setup:

    • Create test tenants in FDB
    • Verify isolation
    • Measure performance
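Because FoundationDB orders keys lexicographically, the isolation property can be smoke-tested against an in-memory ordered map before a real cluster is wired up: a `BTreeMap` shares the same key ordering, so a prefix range scan behaves identically (minus transactions). An illustrative sketch:

```rust
use std::collections::BTreeMap;
use std::ops::Bound::{Excluded, Included};

/// Scan every key under one tenant's prefix. BTreeMap stands in for FDB
/// here purely to test the key-layout logic; tenant IDs are hypothetical.
fn tenant_scan<'a>(
    store: &'a BTreeMap<Vec<u8>, Vec<u8>>,
    tenant_id: &str,
) -> Vec<&'a Vec<u8>> {
    let begin = format!("/coditect/tenants/{}/", tenant_id).into_bytes();
    let mut end = begin.clone();
    *end.last_mut().unwrap() += 1; // '/' -> '0': the prefix's lexicographic successor

    store
        .range((Included(begin), Excluded(end)))
        .map(|(k, _)| k)
        .collect()
}

fn main() {
    let mut store = BTreeMap::new();
    for (tenant, user) in [("tenant-001", "user-123"), ("tenant-002", "user-456")] {
        let key = format!("/coditect/tenants/{}/users/{}/profile", tenant, user);
        store.insert(key.into_bytes(), b"{}".to_vec());
    }

    // A scan under tenant-001's prefix must never see tenant-002's keys
    let keys = tenant_scan(&store, "tenant-001");
    assert_eq!(keys.len(), 1);
    assert!(keys[0].starts_with(b"/coditect/tenants/tenant-001/"));
    println!("isolation ok");
}
```

The same test shape carries over to the real cluster: write under two tenants, range-scan one prefix, and assert the other tenant's keys never appear.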

Near-term (Week 3-4)

  1. Create coditect-core binary:

    • Move file-monitor into core
    • Add session manager from CODI2
    • Add gRPC API
  2. Create coditect-dashboard binary:

    • Simple Axum server
    • Basic log viewer
    • WebSocket for real-time

Future (Month 2+)

  1. Evaluate microservices:
    • If dashboard needs independent scaling → separate it
    • If core becomes too large → split into monitor/session services
    • Let architecture evolve based on real needs

8. Code Reuse from CODI2

What to Take from CODI2

✅ Adopt Immediately:

  • src/session/ - Session management (complete)
  • src/ai_attribution.rs - AI tool detection
  • src/fdb_network.rs + src/fdb_utils.rs - FDB helpers
  • src/logging/enhanced_logger.rs - Advanced logging

✅ Adopt After Cleanup:

  • src/agent/heartbeat.rs - Agent coordination
  • src/websocket/ - WebSocket server (for dashboard)
  • src/mcp/ - MCP server (if needed)

❌ Skip (Not Needed Yet):

  • src/export/ - Export manager (incomplete)
  • src/web/ - Web dashboard (you'll build fresh)
  • src/metrics/ - Metrics (incomplete)

Integration Example

// src/coditect-core/main.rs
mod file_monitor; // Your existing file-monitor code
mod session;      // From CODI2
mod fdb;          // FDB integration

use crate::file_monitor::FileMonitor;
use crate::session::SessionManager;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Initialize FDB
    let db = fdb::init_network()?;
    let tenant_id = std::env::var("CODITECT_TENANT_ID")?; // env var name illustrative

    // Start file monitor with FDB logging
    let monitor = FileMonitor::new(db.clone(), tenant_id)?;
    tokio::spawn(monitor.run());

    // Start session manager
    let sessions = SessionManager::new(db.clone())?;
    tokio::spawn(sessions.run());

    // Keep running until Ctrl-C
    tokio::signal::ctrl_c().await?;
    Ok(())
}

9. Summary & Decision

Two Binaries:

  1. coditect-core - Monolithic core (file monitor + sessions + FDB)
  2. coditect-dashboard - Separate UI/API server

Why:

  • ✅ Balances simplicity with flexibility
  • ✅ Dashboard can restart without affecting monitoring
  • ✅ Core remains stable and always-on
  • ✅ Easy to scale dashboard independently
  • ✅ Simpler than full microservices
  • ✅ More flexible than pure monolith

Migration Strategy

Phase 1 (Now): Add FDB to file-monitor, keep file logs
Phase 2 (Week 3): Create coditect-core binary
Phase 3 (Week 4): Create coditect-dashboard binary
Phase 4 (Month 2): Evaluate whether further splitting is needed

Next Steps

  1. Add FoundationDB dependency to file-monitor
  2. Implement FdbLogger alongside existing file logging
  3. Test multi-tenant isolation
  4. Begin planning coditect-core binary structure
  5. Design dashboard gRPC API contract

This gives you FoundationDB multi-tenant support NOW while setting up for a clean, scalable architecture later.