V5 Migration Plan - Reusing Serena Infrastructure
Date: 2025-10-06
GCP Project: serene-voltage-464305-n2 (Google-GCP-CLI)
Project Number: 1059494892139
Domain: coditect.ai (Static IP: 34.8.51.57)
🎯 Executive Summary
Strategy: Leverage existing V4 infrastructure in serene-voltage-464305-n2 for V5 deployment. Reuse GKE cluster, FoundationDB, domain, and CI/CD pipelines to accelerate time-to-market.
Key Decision: Keep coditect.ai domain and migrate V4→V5 in-place with zero-downtime deployment.
📊 Existing Infrastructure Audit
✅ GKE Cluster (PRODUCTION READY)
Cluster Details:
Name: codi-poc-e2-cluster
Zone: us-central1-a
Namespace: coditect-app
Node Pool: 3-5 nodes (auto-scaling)
Network: default VPC
Load Balancer IP: 34.46.212.40
Static IP (coditect.ai): 34.8.51.57
Current Deployments:
coditect-app namespace:
├── coditect-frontend (Deployment, 2-3 replicas, HPA enabled)
├── coditect-api-v2 (Deployment, 2-5 replicas, HPA enabled)
├── foundationdb (StatefulSet, 3 replicas, 50Gi PV each)
├── fdb-proxy (Deployment, 2 replicas, HAProxy for FDB)
├── coditect-production-ingress (Ingress with SSL)
└── coditect-ai-ssl (ManagedCertificate - Active)
Services:
- coditect-frontend (ClusterIP)
- api-loadbalancer (LoadBalancer at 34.46.212.40)
- fdb-cluster (Headless service for StatefulSet)
- fdb-proxy-service (ILB at 10.128.0.8)
Ingress Configuration:
- Managed SSL: Google-managed certificate (auto-renewal)
- Domains: coditect.ai, api.coditect.ai, www.coditect.ai
- Backend Services: Frontend (React), API (Rust/Axum)
- Health Checks: Configured and passing
✅ FoundationDB Cluster (PRODUCTION READY)
Deployment Architecture:
StatefulSet: foundationdb
Replicas: 3 (double redundancy mode)
Storage: 3 × 50GB PersistentVolumes (150GB total)
Memory: 3 × 2-4GB RAM per node
Redundancy: double (survives 1 node failure)
Connection Details:
# Cluster String (internal to GKE)
coditect:production@10.128.0.8:4500
# HAProxy Routes to StatefulSet Pods:
foundationdb-0.fdb-cluster.coditect-app.svc.cluster.local:4500
foundationdb-1.fdb-cluster.coditect-app.svc.cluster.local:4500
foundationdb-2.fdb-cluster.coditect-app.svc.cluster.local:4500
Data Models (Already Implemented):
- User, Tenant, License, workspace (MVP models)
- Session management schema
- Audit logging patterns
- Multi-tenant key structure:
tenant_id/namespace/identifier
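The multi-tenant key structure above can be captured in a few small helpers — a sketch for illustration (the function names `makeKey`/`parseKey`/`tenantPrefix` are not from the v4 codebase):

```typescript
// Helpers mirroring the v4 key pattern: tenant_id/namespace/identifier
interface TenantKey {
  tenantId: string;
  namespace: string;
  identifier: string;
}

function makeKey(k: TenantKey): string {
  // Reject the separator so a crafted identifier cannot escape its tenant prefix
  for (const part of [k.tenantId, k.namespace, k.identifier]) {
    if (part.includes('/')) {
      throw new Error(`key component may not contain '/': ${part}`);
    }
  }
  return `${k.tenantId}/${k.namespace}/${k.identifier}`;
}

function parseKey(raw: string): TenantKey {
  const parts = raw.split('/');
  if (parts.length !== 3) {
    throw new Error(`malformed key: ${raw}`);
  }
  const [tenantId, namespace, identifier] = parts;
  return { tenantId, namespace, identifier };
}

// Range-scan prefix for "everything in one namespace of one tenant",
// e.g. all users of tenant-123 live under tenant-123/user/
function tenantPrefix(tenantId: string, namespace: string): string {
  return `${tenantId}/${namespace}/`;
}
```

Keeping tenant ID as the leading component means a single FDB range read covers one tenant's data, which is what makes per-tenant export and deletion cheap.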
Scaling Options (Zero Downtime):
# Scale to 5 nodes for high availability
kubectl scale statefulset foundationdb --replicas=5 -n coditect-app
# Upgrade to triple redundancy (survives 2 failures)
kubectl exec -n coditect-app foundationdb-0 -- fdbcli --exec "configure triple"
✅ Container Registry (READY)
Location: gcr.io/serene-voltage-464305-n2/
Existing Images:
gcr.io/serene-voltage-464305-n2/coditect-frontend:latest
gcr.io/serene-voltage-464305-n2/coditect-api-v2:latest
gcr.io/serene-voltage-464305-n2/coditect-websocket:latest (planned)
gcr.io/serene-voltage-464305-n2/coditect-user-pod:latest (planned)
gcr.io/serene-voltage-464305-n2/codi2:latest (Cloud Build ready)
Storage Bucket (Build Artifacts):
gs://serene-voltage-464305-n2-builds/
├── codi2/codi2-[SHA] # CODI2 binaries
├── frontend/ # Frontend builds
└── api-v2/ # API builds
✅ Domain & SSL (ACTIVE)
DNS Configuration:
coditect.ai → 34.8.51.57 (A record)
api.coditect.ai → 34.8.51.57 (A record)
www.coditect.ai → 34.8.51.57 (A record)
SSL Certificate:
- Type: Google-managed (automatic renewal)
- Status: Active on all 3 domains
- Issuer: Google Trust Services
- Auto-renewal: Yes
Current Production URLs:
- https://coditect.ai (Frontend)
- https://api.coditect.ai (API endpoints)
- https://www.coditect.ai (Redirect to main)
✅ CI/CD Pipelines (FUNCTIONAL)
Cloud Build Triggers (Already Configured):
- Frontend Build (src/frontend/cloudbuild.yaml):
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/coditect-frontend:$SHORT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/$PROJECT_ID/coditect-frontend:$SHORT_SHA']
- name: 'gcr.io/cloud-builders/gcloud'
args:
- 'run'
- 'deploy'
- 'coditect-frontend'
- '--image=gcr.io/$PROJECT_ID/coditect-frontend:$SHORT_SHA'
- '--region=us-central1'
- API Build (src/api-v2/cloudbuild.yaml):
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/coditect-api-v2:$SHORT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/$PROJECT_ID/coditect-api-v2:$SHORT_SHA']
- CODI2 Build (codi2/cloudbuild.yaml):
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/codi2:$SHORT_SHA', '.']
- name: 'gcr.io/cloud-builders/gsutil'
args: ['cp', '/workspace/target/release/codi2', 'gs://$PROJECT_ID-builds/codi2/codi2-$SHORT_SHA']
Deployment Commands:
# Set project
gcloud config set project serene-voltage-464305-n2
# Build & deploy frontend
cd src/frontend
gcloud builds submit --config=cloudbuild.yaml
# Build & deploy API
cd src/api-v2
gcloud builds submit --config=cloudbuild.yaml
# Build CODI2
cd codi2
gcloud builds submit --config=cloudbuild.yaml
🚀 V5 Migration Strategy
Phase 1: theia IDE Integration (Week 1-2)
Goal: Deploy theia IDE to GKE, integrate with existing FDB
Tasks:
- Build theia Container:
# Create Dockerfile for theia in t2 project
cd /workspace/PROJECTS/t2
cat > Dockerfile.theia << 'EOF'
FROM node:20-slim
# Install dependencies
RUN apt-get update && apt-get install -y \
git curl wget python3 build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy theia application
WORKDIR /workspace
COPY package*.json ./
# Full install: theia's build step needs devDependencies (prune after the build if image size matters)
RUN npm ci
# Build theia
COPY . .
RUN npm run theia:build
EXPOSE 3000
CMD ["npm", "run", "theia:start", "--", "--hostname=0.0.0.0", "--port=3000"]
EOF
- Create Cloud Build Config:
# cloudbuild-theia.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
args:
- 'build'
- '-f'
- 'Dockerfile.theia'
- '-t'
- 'gcr.io/serene-voltage-464305-n2/t2-theia-ide:${SHORT_SHA}'
- '.'
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/serene-voltage-464305-n2/t2-theia-ide:${SHORT_SHA}']
- name: 'gcr.io/cloud-builders/docker'
args:
- 'tag'
- 'gcr.io/serene-voltage-464305-n2/t2-theia-ide:${SHORT_SHA}'
- 'gcr.io/serene-voltage-464305-n2/t2-theia-ide:latest'
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/serene-voltage-464305-n2/t2-theia-ide:latest']
images:
- 'gcr.io/serene-voltage-464305-n2/t2-theia-ide:${SHORT_SHA}'
- 'gcr.io/serene-voltage-464305-n2/t2-theia-ide:latest'
- Deploy to GKE:
# k8s/theia-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: t2-theia-ide
namespace: coditect-app
spec:
replicas: 2
selector:
matchLabels:
app: t2-theia-ide
template:
metadata:
labels:
app: t2-theia-ide
spec:
containers:
- name: theia
image: gcr.io/serene-voltage-464305-n2/t2-theia-ide:latest
ports:
- containerPort: 3000
name: http
env:
- name: FDB_CLUSTER_FILE
value: "/etc/foundationdb/fdb.cluster"
- name: FDB_CLUSTER_STRING
value: "coditect:production@10.128.0.8:4500"
volumeMounts:
- name: fdb-cluster-config
mountPath: /etc/foundationdb
readOnly: true
volumes:
- name: fdb-cluster-config
configMap:
name: fdb-cluster-config
---
apiVersion: v1
kind: Service
metadata:
name: t2-theia-ide
namespace: coditect-app
spec:
selector:
app: t2-theia-ide
ports:
- port: 3000
targetPort: 3000
- Update Ingress (Add IDE route):
# Add to existing coditect-production-ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: coditect-production-ingress
namespace: coditect-app
spec:
rules:
- host: coditect.ai
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: coditect-frontend # Existing frontend
port:
number: 80
- path: /ide
pathType: Prefix
backend:
service:
name: t2-theia-ide # NEW: theia IDE
port:
number: 3000
- host: api.coditect.ai
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: api-loadbalancer
port:
number: 8080
Deliverables:
- theia container built and pushed to GCR
- theia deployed to GKE (2 replicas)
- FDB connection working from theia
- IDE accessible at https://coditect.ai/ide
Phase 2: CODI2 Binary Integration (Week 2-3)
Goal: Compile CODI2, deploy to user pods, integrate with FDB
Tasks:
- Build CODI2 Locally (already have cloudbuild.yaml):
# Use existing Cloud Build config from v4
cd coditect-v4/codi2
gcloud config set project serene-voltage-464305-n2
gcloud builds submit --config=cloudbuild.yaml
# Download binary from Cloud Storage
mkdir -p /workspace/PROJECTS/t2/binaries
gsutil cp gs://serene-voltage-464305-n2-builds/codi2/codi2-* /workspace/PROJECTS/t2/binaries/
chmod +x /workspace/PROJECTS/t2/binaries/codi2-*
# Test locally with t2 FDB
export FDB_CLUSTER_FILE=/workspace/PROJECTS/t2/docs/v4-config/fdb.cluster
./binaries/codi2-* test
- Create CODI2 ConfigMap:
# k8s/codi2-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: codi2-config
namespace: coditect-app
data:
codi2.toml: |
[server]
bind_addr = "0.0.0.0:8765"
[foundationdb]
cluster_file = "/etc/foundationdb/fdb.cluster"
cluster_string = "coditect:production@10.128.0.8:4500"
[monitoring]
watch_paths = ["/workspace"]
ignore_patterns = [".git", "node_modules", "target", "dist"]
[logging]
level = "info"
format = "json"
- Update workspace Pod Template (from v4):
# k8s/workspace-pod-template.yaml
apiVersion: v1
kind: Pod
metadata:
name: workspace
namespace: user-${USER_ID}
spec:
containers:
- name: workspace
image: gcr.io/serene-voltage-464305-n2/t2-workspace-pod:latest
env:
- name: USER_ID
value: "${USER_ID}"
- name: TENANT_ID
value: "${TENANT_ID}"
- name: SESSION_ID
value: "${SESSION_ID}"
- name: FDB_CLUSTER_FILE
value: "/etc/foundationdb/fdb.cluster"
volumeMounts:
- name: workspace
mountPath: /workspace
- name: codi2-config
mountPath: /etc/codi2
- name: fdb-cluster-config
mountPath: /etc/foundationdb
command: ["/usr/local/bin/codi2"]
args:
- "--config"
- "/etc/codi2/codi2.toml"
- "--user-id"
- "${USER_ID}"
- "--tenant-id"
- "${TENANT_ID}"
volumes:
- name: workspace
persistentVolumeClaim:
claimName: user-workspace-${USER_ID}
- name: codi2-config
configMap:
name: codi2-config
- name: fdb-cluster-config
configMap:
name: fdb-cluster-config
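CODI2 itself is a Rust binary, but the watch/ignore semantics of the codi2.toml above are easy to sketch in TypeScript. Note the matching rule here (a path is ignored when any path segment equals an ignore pattern) is an assumption for illustration — the real matcher may use globs:

```typescript
// Sketch of the filter implied by codi2.toml:
//   watch_paths = ["/workspace"]
//   ignore_patterns = [".git", "node_modules", "target", "dist"]
function shouldEmitEvent(
  path: string,
  watchPaths: string[],
  ignorePatterns: string[]
): boolean {
  // Only emit for files under a watched root
  const watched = watchPaths.some(
    root => path === root || path.startsWith(root + '/')
  );
  if (!watched) return false;
  // Drop the event if any path segment matches an ignore pattern
  const segments = path.split('/');
  return !segments.some(seg => ignorePatterns.includes(seg));
}
```

Filtering out node_modules/target/dist matters here because every surviving event becomes an FDB write; without it a single `npm ci` in a user pod would flood the audit trail.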
Deliverables:
- CODI2 binary compiled via Cloud Build
- CODI2 tested with GKE FDB cluster
- workspace pod template with CODI2 integration
- File monitoring events flowing to FDB
Phase 3: Authentication Migration (Week 3-4)
Goal: Port V4 auth to theia, reuse existing JWT/Argon2 implementation
Tasks:
- Extract V4 Auth Code:
# Copy v4 auth handlers to t2
cp -r /workspace/PROJECTS/t2/src/v4-api-v2/src/handlers/auth \
/workspace/PROJECTS/t2/src/services/auth
# Copy v4 models
cp -r /workspace/PROJECTS/t2/src/v4-api-v2/src/models/user \
/workspace/PROJECTS/t2/src/types/auth
- Create theia Auth Service:
// src/services/auth-service.ts
import { injectable, inject } from '@theia/core/shared/inversify';
import { FDBService } from './fdb-service';
import * as argon2 from 'argon2';
import * as jwt from 'jsonwebtoken';
export interface User {
userId: string;
tenantId: string;
email: string;
passwordHash: string;
createdAt: string;
}
@injectable()
export class AuthService {
constructor(
@inject(FDBService) private fdb: FDBService
) {}
async register(email: string, password: string): Promise<User> {
const userId = `usr-${Date.now()}`;  // NOTE: Date.now() can collide under concurrent registrations; prefer uuid as in Phase 5
const tenantId = `tenant-${Date.now()}`;
const passwordHash = await argon2.hash(password);
const user: User = {
userId,
tenantId,
email,
passwordHash,
createdAt: new Date().toISOString()
};
// Store in FDB (reuse v4 key pattern)
await this.fdb.set(`${tenantId}/user/${userId}`, user);
return user;
}
async login(email: string, password: string): Promise<string> {
// Query FDB for user by email (scan all tenants)
const users = await this.fdb.scan('*/user/*');
const user = users.find(u => u.email === email);
if (!user) {
throw new Error('User not found');
}
const valid = await argon2.verify(user.passwordHash, password);
if (!valid) {
throw new Error('Invalid password');
}
// Generate JWT (reuse v4 pattern)
const token = jwt.sign(
{ userId: user.userId, tenantId: user.tenantId },
process.env.JWT_SECRET || 'dev-secret',  // fallback is for local dev only; require JWT_SECRET in production
{ expiresIn: '7d' }
);
return token;
}
async validateToken(token: string): Promise<User> {
const decoded = jwt.verify(token, process.env.JWT_SECRET || 'dev-secret') as any;
const user = await this.fdb.get(`${decoded.tenantId}/user/${decoded.userId}`);
return user;
}
}
- Create Auth UI Widget:
// src/browser/auth-widget/login-widget.tsx
import * as React from 'react';
import { ReactWidget } from '@theia/core/lib/browser/widgets/react-widget';
import { injectable, inject } from '@theia/core/shared/inversify';
import { AuthService } from '../../services/auth-service';
@injectable()
export class LoginWidget extends ReactWidget {
constructor(
@inject(AuthService) private authService: AuthService
) {
super();
this.id = 'login-widget';
this.title.label = 'Login';
this.title.closable = true;
}
protected render(): React.ReactNode {
return (
<div className="auth-container">
<h2>Login to CODITECT</h2>
<form onSubmit={this.handleLogin}>
<input type="email" placeholder="Email" name="email" required />
<input type="password" placeholder="Password" name="password" required />
<button type="submit">Login</button>
</form>
<a href="#" onClick={this.showRegister}>Register</a>
</div>
);
}
private handleLogin = async (e: React.FormEvent) => {
e.preventDefault();
const form = e.target as HTMLFormElement;
const email = (form.elements.namedItem('email') as HTMLInputElement).value;
const password = (form.elements.namedItem('password') as HTMLInputElement).value;
try {
const token = await this.authService.login(email, password);
localStorage.setItem('jwt', token);
// Trigger pod provisioning here
} catch (err) {
console.error('Login failed:', err);
}
};
private showRegister = () => {
// Show register widget
};
}
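The login above scans every tenant for an email match, which won't scale past a handful of users. A secondary email index avoids the full scan — sketched below against a stubbed key-value store (the `index/email/<email>` key pattern is a hypothetical addition, not part of the v4 schema):

```typescript
// Minimal key-value interface standing in for FDBService (stub for illustration)
interface KV {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
}

// Write the index entry at registration time...
async function indexUserEmail(
  kv: KV, email: string, tenantId: string, userId: string
): Promise<void> {
  await kv.set(`index/email/${email}`, JSON.stringify({ tenantId, userId }));
}

// ...so login becomes two point reads instead of a cross-tenant scan.
async function lookupUserKeyByEmail(kv: KV, email: string): Promise<string | undefined> {
  const raw = await kv.get(`index/email/${email}`);
  if (!raw) return undefined;
  const { tenantId, userId } = JSON.parse(raw);
  return `${tenantId}/user/${userId}`;  // same key pattern AuthService already uses
}
```

The index write and the user write should happen in one FDB transaction so they can never diverge.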
Deliverables:
- Auth service ported to theia
- Login/Register widgets created
- JWT validation working
- User data stored in FDB with v4 key pattern
Phase 4: Pod Auto-Provisioning (Week 4-5)
Goal: Auto-create GKE user pods on registration (reuse v4 Kubernetes logic)
Tasks:
- Extract V4 Pod Provisioning Code:
# Copy Kubernetes client code from v4 API
cp /workspace/PROJECTS/t2/src/v4-api-v2/src/handlers/provisioning.rs \
/workspace/PROJECTS/t2/src/services/pod-provisioning-service.ts
- Create Pod Provisioning Service (TypeScript port):
// src/services/pod-provisioning-service.ts
import { injectable } from '@theia/core/shared/inversify';
import * as k8s from '@kubernetes/client-node';
export interface PodInfo {
namespace: string;
podName: string;
status: string;
pvcSize: string;
}
@injectable()
export class PodProvisioningService {
private kc = new k8s.KubeConfig();
private k8sApi: k8s.CoreV1Api;
constructor() {
this.kc.loadFromDefault();
this.k8sApi = this.kc.makeApiClient(k8s.CoreV1Api);
}
async createUserPod(userId: string, tenantId: string): Promise<PodInfo> {
const namespace = `user-${userId}`;
// 1. Create namespace
await this.createNamespace(namespace);
// 2. Create PVC
await this.createPVC(namespace, userId, '50Gi');
// 3. Deploy pod (from template)
await this.deployPod(namespace, userId, tenantId);
return {
namespace,
podName: 'workspace',
status: 'Running',
pvcSize: '50Gi'
};
}
private async createNamespace(name: string): Promise<void> {
const namespace = {
metadata: {
name,
labels: { 'coditect.ai/user-workspace': 'true' }
}
};
try {
await this.k8sApi.createNamespace(namespace);
} catch (err: any) {
if (err.statusCode !== 409) { // 409 = namespace already exists; safe to ignore
throw err;
}
}
}
private async createPVC(namespace: string, userId: string, size: string): Promise<void> {
const pvc = {
metadata: {
name: `user-workspace-${userId}`,
namespace
},
spec: {
accessModes: ['ReadWriteOnce'],
resources: {
requests: { storage: size }
}
}
};
try {
await this.k8sApi.createNamespacedPersistentVolumeClaim(namespace, pvc);
} catch (err: any) {
if (err.statusCode !== 409) {
throw err;
}
}
}
private async deployPod(namespace: string, userId: string, tenantId: string): Promise<void> {
// Load pod template from k8s/workspace-pod-template.yaml
const pod = {
metadata: {
name: 'workspace',
namespace
},
spec: {
containers: [{
name: 'workspace',
image: 'gcr.io/serene-voltage-464305-n2/t2-workspace-pod:latest',
env: [
{ name: 'USER_ID', value: userId },
{ name: 'TENANT_ID', value: tenantId },
{ name: 'FDB_CLUSTER_FILE', value: '/etc/foundationdb/fdb.cluster' }
],
volumeMounts: [
{ name: 'workspace', mountPath: '/workspace' },
{ name: 'codi2-config', mountPath: '/etc/codi2' },
{ name: 'fdb-cluster-config', mountPath: '/etc/foundationdb' }
]
}],
volumes: [
{ name: 'workspace', persistentVolumeClaim: { claimName: `user-workspace-${userId}` } },
{ name: 'codi2-config', configMap: { name: 'codi2-config' } },
{ name: 'fdb-cluster-config', configMap: { name: 'fdb-cluster-config' } }
]
}
};
await this.k8sApi.createNamespacedPod(namespace, pod);
}
}
- Integrate with Registration Flow:
// In auth-service.ts
async register(email: string, password: string): Promise<User> {
// ... existing user creation ...
// NEW: Auto-provision pod
const podInfo = await this.podProvisioningService.createUserPod(
user.userId,
user.tenantId
);
// Store pod info in FDB
await this.fdb.set(`${user.tenantId}/pod/${user.userId}`, podInfo);
return user;
}
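Registration now spans two systems (an FDB write plus Kubernetes provisioning), so a failure halfway leaves an orphaned namespace. A compensating-cleanup wrapper is one way to handle it — sketched against stubbed provisioning calls (the names mirror PodProvisioningService but are illustrative):

```typescript
// Stub of the provisioning steps from PodProvisioningService (illustrative)
interface ProvisioningSteps {
  createNamespace(ns: string): Promise<void>;
  createPVC(ns: string, userId: string): Promise<void>;
  deployPod(ns: string, userId: string, tenantId: string): Promise<void>;
  deleteNamespace(ns: string): Promise<void>;  // cascades pod + PVC deletion
}

// Run the steps in order; on any failure, delete the namespace so the
// next registration attempt starts clean instead of hitting 409s.
async function provisionWithCleanup(
  steps: ProvisioningSteps,
  userId: string,
  tenantId: string
): Promise<string> {
  const ns = `user-${userId}`;
  await steps.createNamespace(ns);
  try {
    await steps.createPVC(ns, userId);
    await steps.deployPod(ns, userId, tenantId);
    return ns;
  } catch (err) {
    await steps.deleteNamespace(ns);  // compensate: tear down the half-built workspace
    throw err;
  }
}
```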
Deliverables:
- Pod provisioning service implemented
- Namespace creation working
- PVC creation working
- User pod deployment automated
- CODI2 running in user pods
Phase 5: Session Management (Week 5-6)
Goal: Multi-session isolation with FDB persistence (reuse v4 session model)
Tasks:
- Port V4 Session Model:
// src/types/session.ts
export interface Session {
sessionId: string;
userId: string;
tenantId: string;
workspaceId: string;
podNamespace: string;
podName: string;
websocketConnected: boolean;
files: string[];
agents: string[];
createdAt: string;
lastAccessAt: string;
status: 'active' | 'suspended' | 'terminated';
}
- Create Session Service:
// src/services/session-service.ts
import { injectable, inject } from '@theia/core/shared/inversify';
import { FDBService } from './fdb-service';
import { v4 as uuidv4 } from 'uuid';
@injectable()
export class SessionService {
constructor(
@inject(FDBService) private fdb: FDBService
) {}
async createSession(userId: string, tenantId: string): Promise<Session> {
const sessionId = `ses-${uuidv4()}`;
const podInfo = await this.fdb.get(`${tenantId}/pod/${userId}`);
const session: Session = {
sessionId,
userId,
tenantId,
workspaceId: `wks-${Date.now()}`,
podNamespace: podInfo.namespace,
podName: podInfo.podName,
websocketConnected: false,
files: [],
agents: [],
createdAt: new Date().toISOString(),
lastAccessAt: new Date().toISOString(),
status: 'active'
};
// Store in FDB (v4 key pattern)
await this.fdb.set(`${tenantId}/session/${sessionId}`, session);
return session;
}
async switchSession(sessionId: string): Promise<Session> {
// Load session from FDB
const sessions = await this.fdb.scan('*/session/*');
const session = sessions.find(s => s.sessionId === sessionId);
if (!session) {
throw new Error('Session not found');
}
// Update last access
session.lastAccessAt = new Date().toISOString();
await this.fdb.set(`${session.tenantId}/session/${sessionId}`, session);
return session;
}
async listSessions(userId: string): Promise<Session[]> {
const allSessions = await this.fdb.scan('*/session/*');
return allSessions.filter(s => s.userId === userId);
}
}
- Integrate with theia Tabs:
// src/browser/session-widget/session-tab-widget.tsx
import * as React from 'react';
import { ReactWidget } from '@theia/core/lib/browser/widgets/react-widget';
import { injectable, inject } from '@theia/core/shared/inversify';
import { SessionService, Session } from '../../services/session-service';
@injectable()
export class SessionTabWidget extends ReactWidget {
@inject(SessionService) sessionService!: SessionService;
private session: Session | null = null;
async loadSession(sessionId: string) {
this.session = await this.sessionService.switchSession(sessionId);
this.update();
}
protected render(): React.ReactNode {
if (!this.session) {
return <div>Loading session...</div>;
}
return (
<div className="session-container">
<h3>Session: {this.session.sessionId}</h3>
<p>workspace: {this.session.workspaceId}</p>
<p>Pod: {this.session.podNamespace}/{this.session.podName}</p>
<p>Files: {this.session.files.length}</p>
<p>Status: {this.session.status}</p>
{/* File explorer, editor, terminal here */}
</div>
);
}
}
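The status field in the Session model supports a simple lifecycle guard. The transition table below is an assumption — the plan doesn't specify which transitions v4 allowed — but encoding it keeps a bug from reviving a terminated session:

```typescript
type SessionStatus = 'active' | 'suspended' | 'terminated';

// Assumed lifecycle: active <-> suspended, either may terminate, terminated is final.
const allowedTransitions: Record<SessionStatus, SessionStatus[]> = {
  active: ['suspended', 'terminated'],
  suspended: ['active', 'terminated'],
  terminated: [],
};

function transitionSession(current: SessionStatus, next: SessionStatus): SessionStatus {
  if (!allowedTransitions[current].includes(next)) {
    throw new Error(`illegal session transition: ${current} -> ${next}`);
  }
  return next;
}
```

SessionService would call this before writing the updated session back to FDB, so the stored status can never skip a legal state.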
Deliverables:
- Session model ported from v4
- Session service implemented
- Session ↔ theia tab mapping
- Multi-session switching working
- Session persistence to FDB
Phase 6: Zero-Downtime Deployment (Week 6)
Goal: Deploy V5 alongside V4, migrate traffic gradually
Strategy: Blue-Green Deployment
Tasks:
- Deploy V5 as Separate Service:
# k8s/v5-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: t2-v5-ide
namespace: coditect-app
spec:
replicas: 2
selector:
matchLabels:
app: t2-v5-ide
version: v5
template:
metadata:
labels:
app: t2-v5-ide
version: v5
spec:
containers:
- name: theia
image: gcr.io/serene-voltage-464305-n2/t2-theia-ide:latest
# ... same config as Phase 1 ...
---
apiVersion: v1
kind: Service
metadata:
name: t2-v5-ide
namespace: coditect-app
spec:
selector:
app: t2-v5-ide
version: v5
ports:
- port: 3000
targetPort: 3000
- Update Ingress for Traffic Split:
# k8s/traffic-split-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: coditect-production-ingress
namespace: coditect-app
annotations:
# Canary annotations are honored by the NGINX Ingress Controller only -- the
# existing GCE ingress (Google-managed certs) ignores them, and NGINX expects
# them on a second "canary" Ingress that mirrors the primary's host rules.
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: "10"  # 10% of traffic to V5, 90% stays on V4
spec:
rules:
- host: coditect.ai
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: coditect-frontend # V4
port:
number: 80
- path: /ide
pathType: Prefix
backend:
service:
name: t2-v5-ide # V5 (canary)
port:
number: 3000
- Gradual Traffic Migration:
# Day 1: 10% traffic to V5
kubectl patch ingress coditect-production-ingress -n coditect-app \
-p '{"metadata":{"annotations":{"nginx.ingress.kubernetes.io/canary-weight":"10"}}}'
# Day 3: 50% traffic to V5
kubectl patch ingress coditect-production-ingress -n coditect-app \
-p '{"metadata":{"annotations":{"nginx.ingress.kubernetes.io/canary-weight":"50"}}}'
# Day 7: 100% traffic to V5
kubectl patch ingress coditect-production-ingress -n coditect-app \
-p '{"metadata":{"annotations":{"nginx.ingress.kubernetes.io/canary-weight":"100"}}}'
- Data Migration (If Needed):
# Snapshot V4 FDB data first (fdbcli has no backup command; use the fdbbackup tool,
# which requires backup_agent processes running against the cluster)
kubectl exec -n coditect-app foundationdb-0 -- fdbbackup start -C /etc/foundationdb/fdb.cluster -d file:///var/fdb/backup  # destination URL is an example
# V5 uses same FDB cluster, no migration needed
# Just ensure V5 services use the same key patterns
- Decommission V4 (After 100% traffic on V5):
# Scale down V4 frontend
kubectl scale deployment coditect-frontend --replicas=0 -n coditect-app
# Remove V4 from ingress
kubectl patch ingress coditect-production-ingress -n coditect-app \
--type=json -p='[{"op": "remove", "path": "/spec/rules/0/http/paths/0"}]'
# Delete V4 deployment (optional, after 30 days)
kubectl delete deployment coditect-frontend -n coditect-app
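The canary weights above are enforced by the ingress controller itself, but the effect is easy to reason about with deterministic bucketing. This sketch is illustrative only — NGINX's actual weighting is per-request and random, not hash-based:

```typescript
// Map a stable client key (e.g. user ID) to a 0-99 bucket and compare it
// against the canary weight: bucket < weight -> route to V5.
function bucketFor(clientKey: string): number {
  let hash = 0;
  for (const ch of clientKey) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;  // simple rolling hash
  }
  return hash % 100;
}

function routeToCanary(clientKey: string, canaryWeight: number): boolean {
  return bucketFor(clientKey) < canaryWeight;
}
```

Hash-based bucketing like this is what you would reach for if sticky canary assignment per user mattered (so one user doesn't flip between V4 and V5 across requests); random per-request weighting is fine when sessions are self-contained.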
Deliverables:
- V5 deployed alongside V4
- Traffic split configured (10%/50%/100%)
- Monitoring shows V5 health
- V4 decommissioned after successful migration
📋 Complete Task Checklist
Infrastructure Reuse
- Audit GKE cluster (codi-poc-e2-cluster)
- Verify FoundationDB cluster (3-node StatefulSet)
- Confirm SSL certificates (coditect.ai active)
- Check CI/CD pipelines (Cloud Build ready)
- Test FDB connection from local
- Verify container registry access
Phase 1: theia Deployment
- Create Dockerfile.theia
- Create cloudbuild-theia.yaml
- Build theia container image
- Push to gcr.io/serene-voltage-464305-n2
- Create k8s/theia-deployment.yaml
- Deploy theia to GKE
- Configure FDB connection in theia
- Update ingress for /ide route
- Test theia at https://coditect.ai/ide
Phase 2: CODI2 Integration
- Build CODI2 via Cloud Build
- Download CODI2 binary from GCS
- Test CODI2 with GKE FDB
- Create codi2-config ConfigMap
- Update workspace pod template with CODI2
- Deploy test user pod with CODI2
- Verify file monitoring events in FDB
Phase 3: Authentication
- Extract v4 auth handlers
- Port to TypeScript (auth-service.ts)
- Implement JWT validation
- Create login widget (React)
- Create register widget (React)
- Test user registration flow
- Test user login flow
- Verify users stored in FDB
Phase 4: Pod Provisioning
- Extract v4 Kubernetes code
- Port to TypeScript (pod-provisioning-service.ts)
- Implement namespace creation
- Implement PVC creation
- Implement pod deployment
- Integrate with registration
- Test end-to-end registration → pod creation
- Verify user pods running in GKE
Phase 5: Session Management
- Port v4 session model
- Create session service
- Implement session creation
- Implement session switching
- Create session tab widget
- Map sessions to theia tabs
- Test multi-session isolation
- Verify session persistence in FDB
Phase 6: Zero-Downtime Deployment
- Deploy V5 as separate service
- Configure traffic split (10%)
- Monitor V5 health metrics
- Increase traffic to 50%
- Monitor for 3 days
- Increase traffic to 100%
- Decommission V4 frontend
- Archive V4 deployments
🎯 Success Criteria
V5 MVP Complete When:
- ✅ theia IDE accessible at https://coditect.ai/ide
- ✅ Users can register and login
- ✅ GKE user pod auto-provisioned on registration
- ✅ CODI2 monitoring active in user pods
- ✅ Multi-session tabs working
- ✅ File changes logged to FDB audit trail
- ✅ All data persisted to existing FDB cluster
- ✅ Zero downtime during V4→V5 migration
- ✅ SSL certificates working on all domains
- ✅ Beta users can access via coditect.ai
🔗 Infrastructure Endpoints
| Service | URL | Purpose |
|---|---|---|
| Frontend (V4) | https://coditect.ai | React frontend (legacy) |
| theia IDE (V5) | https://coditect.ai/ide | New IDE interface |
| API | https://api.coditect.ai | Backend API (shared) |
| GKE Dashboard | GCP Console → Kubernetes | Cluster management |
| FDB Proxy | 10.128.0.8:4500 | Internal FDB access |
| Container Registry | gcr.io/serene-voltage-464305-n2 | Docker images |
| Build Artifacts | gs://serene-voltage-464305-n2-builds | CODI2 binaries |
🚀 Quick Commands
Deploy to Existing Infrastructure
# Set GCP project
gcloud config set project serene-voltage-464305-n2
gcloud config set billing/quota_project serene-voltage-464305-n2
# Get GKE credentials
gcloud container clusters get-credentials codi-poc-e2-cluster --zone us-central1-a
# Build theia image
cd /workspace/PROJECTS/t2
gcloud builds submit --config=cloudbuild-theia.yaml
# Deploy theia to GKE
kubectl apply -f k8s/theia-deployment.yaml -n coditect-app
# Check deployment
kubectl get pods -n coditect-app -l app=t2-theia-ide
# View logs
kubectl logs -l app=t2-theia-ide -n coditect-app --tail=50
# Test FDB connection
kubectl exec -n coditect-app -it $(kubectl get pods -n coditect-app -l app=t2-theia-ide -o jsonpath='{.items[0].metadata.name}') -- bash
# Inside pod:
cat /etc/foundationdb/fdb.cluster
Monitor V5 Deployment
# Watch pods
kubectl get pods -n coditect-app -w
# Check ingress
kubectl get ingress -n coditect-app
kubectl describe ingress coditect-production-ingress -n coditect-app
# Test endpoints
curl -I https://coditect.ai
curl -I https://coditect.ai/ide
curl -I https://api.coditect.ai
# Check FDB status
kubectl exec -n coditect-app foundationdb-0 -- fdbcli --exec "status"
📝 Next Steps
- Immediate (This week):
- Create Dockerfile.theia and cloudbuild config
- Build theia container via Cloud Build
- Deploy theia to GKE namespace coditect-app
- Test FDB connection from theia pods
- Short-term (Next 2 weeks):
- Compile CODI2 via existing Cloud Build
- Integrate CODI2 into workspace pod template
- Port v4 auth to theia (auth-service.ts)
- Create login/register UI widgets
- Medium-term (Weeks 3-4):
- Port v4 pod provisioning to TypeScript
- Auto-create user pods on registration
- Implement session management
- Map sessions to theia tabs
- Long-term (Weeks 5-6):
- Deploy V5 alongside V4
- Gradual traffic migration (10%→50%→100%)
- Decommission V4 after successful migration
- Launch beta to users
Questions Answered:
- ✅ GCP Project: serene-voltage-464305-n2 (reuse existing)
- ✅ FDB Cluster: Use existing GKE StatefulSet (coditect:production@10.128.0.8:4500)
- ✅ Container Registry: gcr.io/serene-voltage-464305-n2 (already set up)
- ✅ Domain: coditect.ai (keep existing, add /ide route for V5)
- ✅ Deployment Strategy: Blue-green with canary rollout (zero downtime)
Estimated Timeline: 6-8 weeks to full V5 deployment with zero-downtime migration.