
ADR-012-v4: Code Generation Architecture - Part 2 (Technical)

Document Specification Block​

Document: ADR-012-v4-code-generation-part2-technical
Version: 1.0.0
Purpose: Constrain AI implementation with exact technical specifications for code generation
Audience: AI agents, developers implementing the system
Date Created: 2025-08-31
Date Modified: 2025-08-31
QA Review Date: Pending
Status: DRAFT

Table of Contents​

  1. Constraints
  2. Dependencies
  3. Component Architecture
  4. Data Models
  5. Implementation Patterns
  6. API Specifications
  7. Testing Requirements
  8. Performance Benchmarks
  9. Security Controls
  10. Logging and Error Handling
  11. References
  12. Approval Signatures

1. Constraints​

CONSTRAINT: Specification-Driven Generation​

All code generation MUST start from formal specifications (ADR, OpenAPI, GraphQL schema). Ad-hoc generation without specifications is forbidden.

CONSTRAINT: Quality Gates​

Generated code MUST pass ALL quality checks before delivery: syntax validation, test execution (≥95% coverage), security scanning, style compliance.

CONSTRAINT: Multi-Provider Resilience​

System MUST support multiple AI providers with automatic failover. Single provider dependency is forbidden.

CONSTRAINT: Template Versioning​

All templates MUST be versioned and tested. Using unversioned or untested templates is forbidden.

CONSTRAINT: Audit Trail​

Every generation request MUST be logged with full context for compliance and debugging. Anonymous generation is forbidden.
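The quality-gate constraint above can be distilled into a single predicate. The sketch below is illustrative only (`evaluate_gates` and `Gate` are assumed names, not part of the service); the actual pipeline is specified in section 5:

```rust
// Hypothetical distillation of the delivery gates: no syntax errors,
// >= 95% test coverage, no high-severity security findings.
#[derive(Debug, PartialEq)]
pub enum Gate {
    Pass,
    Fail(&'static str),
}

pub fn evaluate_gates(syntax_errors: usize, coverage: f32, high_severity_issues: usize) -> Gate {
    if syntax_errors > 0 {
        return Gate::Fail("syntax validation failed");
    }
    if coverage < 95.0 {
        return Gate::Fail("coverage below 95%");
    }
    if high_severity_issues > 0 {
        return Gate::Fail("high-severity security issue");
    }
    Gate::Pass
}

fn main() {
    // A clean run passes; a low-coverage run is rejected.
    assert_eq!(evaluate_gates(0, 97.2, 0), Gate::Pass);
    assert_eq!(evaluate_gates(0, 80.0, 0), Gate::Fail("coverage below 95%"));
}
```

Note the gates are AND-ed: failing any single check rejects the delivery, matching the "MUST pass ALL quality checks" wording.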


2. Dependencies​

Cargo.toml Dependencies

[dependencies]
# Core async runtime
tokio = { version = "1.35", features = ["full"] }

# Web framework
actix-web = "4.4"
actix-rt = "2.9"

# Database
foundationdb = { version = "0.8", features = ["embedded-fdb-include"] }

# AI Provider SDKs
anthropic = { version = "0.5", features = ["async"] }
google-generativeai = { version = "0.3", features = ["tokio"] }
async-openai = "0.19"
ollama-rs = { version = "0.1", features = ["stream"] }

# Serialization
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
serde_yaml = "0.9"

# Template Engine
tera = "1.19"
handlebars = "5.1"

# Code Analysis
syn = { version = "2.0", features = ["full", "extra-traits"] }
tree-sitter = "0.20"
tree-sitter-rust = "0.20"

# Testing & Quality
cargo_metadata = "0.18"
insta = { version = "1.34", features = ["yaml"] }

# Error handling
anyhow = "1.0"
thiserror = "1.0"

# Utilities
uuid = { version = "1.6", features = ["v4", "serde"] }
chrono = { version = "0.4", features = ["serde"] }
tracing = "0.1"
regex = "1.10"
semver = "1.0"
async-trait = "0.1"

[dev-dependencies]
mockall = "0.12"
proptest = "1.4"
criterion = { version = "0.5", features = ["html_reports"] }


3. Component Architecture​

// File: src/services/code_generation_service.rs
use std::sync::Arc;
use anyhow::Result;
use uuid::Uuid;

pub struct CodeGenerationService {
    spec_parser: Arc<SpecificationParser>,
    context_builder: Arc<ContextBuilder>,
    task_planner: Arc<TaskPlanner>,
    ai_orchestrator: Arc<AIOrchestrator>,
    quality_validator: Arc<QualityValidator>,
    template_engine: Arc<TemplateEngine>,
    audit_service: Arc<AuditService>,
}

// File: src/services/code_generation_service.rs
impl CodeGenerationService {
    pub async fn generate_code(
        &self,
        request: GenerationRequest,
    ) -> Result<GenerationResult> {
        // 1. Parse specifications
        let spec = self.spec_parser.parse(&request.specification).await?;

        // 2. Build context from existing code
        let context = self.context_builder
            .build_context(&spec, &request.tenant_id)
            .await?;

        // 3. Plan generation tasks
        let tasks = self.task_planner.plan_tasks(&spec, &context).await?;

        // 4. Execute generation with AI
        let generated_code = self.ai_orchestrator
            .execute_tasks(&tasks, &request.preferences)
            .await?;

        // 5. Validate quality
        let validation_result = self.quality_validator
            .validate(&generated_code)
            .await?;

        // 6. Apply templates for consistency
        let final_code = self.template_engine
            .apply_templates(&generated_code, &spec.template_id)
            .await?;

        // 7. Audit trail
        self.audit_service
            .log_generation(&request, &final_code, &validation_result)
            .await?;

        Ok(GenerationResult {
            id: Uuid::new_v4(),
            request_id: request.id,
            code: final_code,
            validation: validation_result,
            metrics: self.calculate_metrics(&final_code),
            created_at: chrono::Utc::now(),
        })
    }
}


4. Data Models​

Core Generation Models​

// File: src/models/generation.rs
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use uuid::Uuid;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GenerationRequest {
    pub id: Uuid,
    pub tenant_id: Uuid,
    pub user_id: Uuid,
    pub specification: SpecificationInput,
    pub preferences: GenerationPreferences,
    pub metadata: HashMap<String, serde_json::Value>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum SpecificationInput {
    ADR {
        adr_id: String,
        version: String,
    },
    OpenAPI {
        spec: serde_json::Value,
        operations: Vec<String>,
    },
    GraphQL {
        schema: String,
        operations: Vec<String>,
    },
    UserStory {
        story: String,
        acceptance_criteria: Vec<String>,
    },
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GenerationPreferences {
    pub language: ProgrammingLanguage,
    pub framework: Option<String>,
    pub style_guide: Option<String>,
    pub test_framework: TestFramework,
    pub ai_provider_preference: Option<AIProvider>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum ProgrammingLanguage {
    Rust,
    Go,
    Python,
    TypeScript,
    Java,
    CSharp,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GenerationResult {
    pub id: Uuid,
    pub request_id: Uuid,
    pub code: GeneratedCode,
    pub validation: ValidationResult,
    pub metrics: GenerationMetrics,
    pub created_at: DateTime<Utc>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GeneratedCode {
    pub files: Vec<GeneratedFile>,
    pub tests: Vec<GeneratedFile>,
    pub documentation: Vec<GeneratedFile>,
    pub dependencies: DependencyManifest,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GeneratedFile {
    pub path: String,
    pub content: String,
    pub language: String,
    pub purpose: FilePurpose,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum FilePurpose {
    Implementation,
    UnitTest,
    IntegrationTest,
    Documentation,
    Configuration,
}

Template System Models​

// File: src/models/template.rs
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CodeTemplate {
    pub id: Uuid,
    pub name: String,
    pub version: semver::Version,
    pub language: ProgrammingLanguage,
    pub category: TemplateCategory,
    pub template_content: String,
    pub variables: Vec<TemplateVariable>,
    pub test_coverage: f32,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum TemplateCategory {
    Service,
    Repository,
    APIHandler,
    Model,
    Test,
    Documentation,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TemplateVariable {
    pub name: String,
    pub var_type: VariableType,
    pub required: bool,
    pub default: Option<String>,
    pub validation: Option<String>,
}
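TemplateVariable implies a substitution step in which optional variables fall back to their declared defaults. The stdlib-only sketch below illustrates that behavior; the `render` helper is an assumption for illustration, since the real system delegates to Tera/Handlebars:

```rust
use std::collections::HashMap;

/// Illustrative stand-in for the template rendering step: substitutes
/// "{{name}}" placeholders from supplied values, then from defaults,
/// and rejects templates with unresolved variables.
pub fn render(
    template: &str,
    values: &HashMap<&str, &str>,
    defaults: &HashMap<&str, &str>,
) -> Result<String, String> {
    let mut out = template.to_string();
    // Naive scan: replace each declared variable; real engines parse the template.
    for (name, value) in values {
        out = out.replace(&format!("{{{{{}}}}}", name), value);
    }
    for (name, value) in defaults {
        out = out.replace(&format!("{{{{{}}}}}", name), value);
    }
    if out.contains("{{") {
        return Err("unresolved template variable".into());
    }
    Ok(out)
}

fn main() {
    let mut vals = HashMap::new();
    vals.insert("service", "UserService");
    let rendered = render("pub struct {{service}};", &vals, &HashMap::new()).unwrap();
    assert_eq!(rendered, "pub struct UserService;");
}
```

A required variable with neither a value nor a default surfaces as an unresolved-variable error, which maps naturally to the `required: true` field above.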


5. Implementation Patterns​

AI Provider Orchestration​

// File: src/ai/orchestrator.rs
use std::collections::HashMap;
use std::sync::Arc;
use anyhow::{anyhow, Result};
use async_trait::async_trait;

#[async_trait]
pub trait AIProvider: Send + Sync {
    async fn generate(
        &self,
        prompt: &str,
        context: &GenerationContext,
    ) -> Result<String>;

    fn capabilities(&self) -> ProviderCapabilities;
    fn cost_estimate(&self, tokens: usize) -> f64;
}

// File: src/ai/orchestrator.rs
pub struct AIOrchestrator {
    providers: HashMap<String, Box<dyn AIProvider>>,
    router: Arc<ProviderRouter>,
    fallback_chain: Vec<String>,
}

impl AIOrchestrator {
    pub async fn execute_tasks(
        &self,
        tasks: &[GenerationTask],
        preferences: &GenerationPreferences,
    ) -> Result<GeneratedCode> {
        let mut generated_files = Vec::new();

        for task in tasks {
            // Select optimal provider for task
            let provider = self.router
                .select_provider(task, preferences, &self.providers)
                .await?;

            // Execute with retry and fallback
            let result = self.execute_with_fallback(
                task,
                provider,
                &self.fallback_chain,
            ).await?;

            generated_files.extend(result);
        }

        Ok(self.assemble_code(generated_files))
    }

    async fn execute_with_fallback(
        &self,
        task: &GenerationTask,
        primary_provider: &str,
        fallback_chain: &[String],
    ) -> Result<Vec<GeneratedFile>> {
        // Try primary provider
        match self.providers[primary_provider].generate(
            &task.prompt,
            &task.context,
        ).await {
            Ok(result) => return Ok(self.parse_result(result)),
            Err(e) => {
                tracing::warn!("Primary provider failed: {}", e);
            }
        }

        // Try fallback providers, skipping the one that already failed
        for provider_name in fallback_chain {
            if provider_name == primary_provider {
                continue;
            }

            match self.providers[provider_name].generate(
                &task.prompt,
                &task.context,
            ).await {
                Ok(result) => return Ok(self.parse_result(result)),
                Err(e) => {
                    tracing::warn!(
                        "Fallback provider {} failed: {}",
                        provider_name, e
                    );
                }
            }
        }

        Err(anyhow!("All providers failed"))
    }
}

Quality Validation Pipeline​

// File: src/quality/validator.rs
pub struct QualityValidator {
    syntax_checker: Arc<SyntaxChecker>,
    test_runner: Arc<TestRunner>,
    security_scanner: Arc<SecurityScanner>,
    style_checker: Arc<StyleChecker>,
}

impl QualityValidator {
    pub async fn validate(
        &self,
        code: &GeneratedCode,
    ) -> Result<ValidationResult> {
        // Parallel validation
        let (syntax, tests, security, style) = tokio::join!(
            self.syntax_checker.check(&code.files),
            self.test_runner.run_tests(&code.tests),
            self.security_scanner.scan(&code.files),
            self.style_checker.check(&code.files),
        );

        // Unwrap the test results once; they are needed both for the
        // report and for the coverage calculation.
        let test_results = tests?;
        let coverage = self.calculate_coverage(&test_results);

        let mut result = ValidationResult {
            passed: true,
            syntax_errors: syntax?,
            test_results,
            security_issues: security?,
            style_violations: style?,
            coverage,
        };

        // Enforce quality gates
        if !result.syntax_errors.is_empty() {
            result.passed = false;
        }

        if result.coverage < 95.0 {
            result.passed = false;
        }

        if result.security_issues.iter().any(|i| i.severity == Severity::High) {
            result.passed = false;
        }

        Ok(result)
    }
}


6. API Specifications​

Code Generation Endpoints​

// File: src/api/handlers/generation.rs
use std::sync::Arc;
use actix_web::{get, post, web, HttpResponse, Result};
use serde_json::json;

#[post("/api/v1/generate")]
pub async fn generate_code(
    request: web::Json<GenerationRequest>,
    generation_service: web::Data<Arc<CodeGenerationService>>,
    claims: Claims,
) -> Result<HttpResponse> {
    // Validate tenant access
    if claims.tenant_id != request.tenant_id {
        return Ok(HttpResponse::Forbidden().json(json!({
            "error": "tenant_mismatch",
            "message": "Cannot generate code for another tenant"
        })));
    }

    // Execute generation
    match generation_service.generate_code(request.into_inner()).await {
        Ok(result) => Ok(HttpResponse::Ok().json(result)),
        Err(e) => match e.downcast_ref::<GenerationError>() {
            Some(GenerationError::InvalidSpecification(msg)) => {
                Ok(HttpResponse::BadRequest().json(json!({
                    "error": "invalid_specification",
                    "message": msg,
                    "suggestion": "Check specification format and required fields"
                })))
            }
            Some(GenerationError::TemplateNotFound(template)) => {
                Ok(HttpResponse::NotFound().json(json!({
                    "error": "template_not_found",
                    "message": format!("Template '{}' not found", template),
                    "available_templates": generation_service.list_templates()
                })))
            }
            _ => Ok(HttpResponse::InternalServerError().json(json!({
                "error": "generation_failed",
                "message": "Code generation failed. Please try again."
            }))),
        },
    }
}

#[get("/api/v1/generation/{id}")]
pub async fn get_generation_result(
    path: web::Path<Uuid>,
    generation_service: web::Data<Arc<CodeGenerationService>>,
    claims: Claims,
) -> Result<HttpResponse> {
    let result = generation_service
        .get_result(path.into_inner(), claims.tenant_id)
        .await?;

    Ok(HttpResponse::Ok().json(result))
}

#[post("/api/v1/generate/validate")]
pub async fn validate_specification(
    spec: web::Json<SpecificationInput>,
    spec_parser: web::Data<Arc<SpecificationParser>>,
) -> Result<HttpResponse> {
    match spec_parser.validate(&spec).await {
        Ok(validation) => Ok(HttpResponse::Ok().json(validation)),
        Err(e) => Ok(HttpResponse::BadRequest().json(json!({
            "valid": false,
            "errors": e.to_string()
        }))),
    }
}


7. Testing Requirements​

Test Coverage Requirements​

  • Unit Test Coverage: ≥95% of all code generation logic
  • Integration Test Coverage: ≥90% of AI provider integrations
  • Template Test Coverage: 100% of all templates must have tests
  • Quality Gate Tests: 100% of validation rules must be tested
  • Performance Tests: All generation pipelines must be benchmarked
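The percentage thresholds above are ratios of covered to total lines as reported by the test runner. A minimal sketch of the computation (the helper name `coverage_percent` is an assumption, not part of the validator API):

```rust
/// Line-coverage ratio the thresholds above are measured against.
/// Returns 0.0 for an empty file set rather than dividing by zero.
pub fn coverage_percent(covered: usize, total: usize) -> f32 {
    if total == 0 {
        return 0.0;
    }
    covered as f32 / total as f32 * 100.0
}

fn main() {
    let pct = coverage_percent(195, 200);
    assert!(pct >= 95.0); // meets the unit-test gate from the list above
}
```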

Unit Tests​

// File: src/services/tests/code_generation_tests.rs
#[cfg(test)]
mod tests {
    use super::*;
    use mockall::predicate::*;

    #[tokio::test]
    async fn test_generate_code_with_adr_spec() {
        // Setup mocks
        let mut spec_parser = MockSpecificationParser::new();
        spec_parser
            .expect_parse()
            .with(eq(SpecificationInput::ADR {
                adr_id: "ADR-001".to_string(),
                version: "1.0.0".to_string(),
            }))
            .returning(|_| Ok(ParsedSpec::test_fixture()));

        let service = CodeGenerationService::builder()
            .spec_parser(Arc::new(spec_parser))
            .build();

        // Execute
        let request = GenerationRequest::test_fixture();
        let result = service.generate_code(request).await.unwrap();

        // Verify
        assert!(result.validation.passed);
        assert!(result.validation.coverage >= 95.0);
        assert!(!result.code.files.is_empty());
    }

    #[tokio::test]
    async fn test_ai_provider_fallback() {
        let mut primary = MockAIProvider::new();
        primary
            .expect_generate()
            .returning(|_, _| Err(anyhow!("Service unavailable")));

        let mut fallback = MockAIProvider::new();
        fallback
            .expect_generate()
            .returning(|_, _| Ok("generated code".to_string()));

        let orchestrator = AIOrchestrator::builder()
            .add_provider("claude", Box::new(primary))
            .add_provider("gemini", Box::new(fallback))
            .fallback_chain(vec!["claude".to_string(), "gemini".to_string()])
            .build();

        let tasks = vec![GenerationTask::test_fixture()];
        let prefs = GenerationPreferences::test_fixture();
        let result = orchestrator.execute_tasks(&tasks, &prefs).await;
        assert!(result.is_ok());
    }
}

Integration Tests​

// File: tests/integration/generation_workflow.rs
use std::process::Command;

#[tokio::test]
async fn test_full_generation_workflow() {
    let app = setup_test_app().await;

    // Submit generation request
    let request = json!({
        "specification": {
            "type": "openapi",
            "spec": load_openapi_fixture(),
            "operations": ["createUser", "getUser"]
        },
        "preferences": {
            "language": "rust",
            "framework": "actix-web",
            "test_framework": "tokio-test"
        }
    });

    let response = app.post("/api/v1/generate")
        .json(&request)
        .send()
        .await
        .unwrap();

    assert_eq!(response.status(), 200);

    let result: GenerationResult = response.json().await.unwrap();

    // Verify generated code compiles
    let temp_dir = create_temp_project(&result.code);
    let compile_result = Command::new("cargo")
        .current_dir(&temp_dir)
        .arg("build")
        .output()
        .expect("Failed to compile");

    assert!(compile_result.status.success());

    // Verify tests pass
    let test_result = Command::new("cargo")
        .current_dir(&temp_dir)
        .arg("test")
        .output()
        .expect("Failed to run tests");

    assert!(test_result.status.success());
}


8. Performance Benchmarks​

Required Performance Metrics​

// File: benches/generation_benchmarks.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn benchmark_code_generation(c: &mut Criterion) {
    let runtime = tokio::runtime::Runtime::new().unwrap();
    let service = setup_generation_service();

    c.bench_function("generate_simple_crud", |b| {
        b.to_async(&runtime).iter(|| async {
            let request = create_crud_request();
            let result = service.generate_code(black_box(request)).await;
            assert!(result.is_ok());
        })
    });

    c.bench_function("generate_complex_service", |b| {
        b.to_async(&runtime).iter(|| async {
            let request = create_complex_service_request();
            let result = service.generate_code(black_box(request)).await;
            assert!(result.is_ok());
        })
    });
}

criterion_group!(benches, benchmark_code_generation);
criterion_main!(benches);

// Performance requirements
const MAX_GENERATION_TIME_MS: u64 = 30_000; // 30 seconds
const MAX_SIMPLE_CRUD_MS: u64 = 5_000; // 5 seconds
const MAX_VALIDATION_TIME_MS: u64 = 2_000; // 2 seconds
const MIN_THROUGHPUT_PER_SECOND: f64 = 10.0; // requests/second
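A benchmark harness might assert against these budgets roughly as follows. The `within_budget` helper is illustrative, not part of the criterion suite above; it checks one latency sample and an observed request rate against the stated limits:

```rust
use std::time::Duration;

// Budgets restated from the constants above.
const MAX_SIMPLE_CRUD_MS: u64 = 5_000;
const MIN_THROUGHPUT_PER_SECOND: f64 = 10.0;

/// Returns true when a simple-CRUD latency sample and the measured
/// request rate both satisfy the performance requirements.
pub fn within_budget(sample: Duration, requests: u64, window: Duration) -> bool {
    let latency_ok = sample.as_millis() as u64 <= MAX_SIMPLE_CRUD_MS;
    let throughput = requests as f64 / window.as_secs_f64();
    latency_ok && throughput >= MIN_THROUGHPUT_PER_SECOND
}

fn main() {
    // 3.2 s latency and 12 req/s: inside both budgets.
    assert!(within_budget(Duration::from_millis(3_200), 120, Duration::from_secs(10)));
    // 8 s latency blows the simple-CRUD budget regardless of throughput.
    assert!(!within_budget(Duration::from_millis(8_000), 120, Duration::from_secs(10)));
}
```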


9. Security Controls​

Generation Security​

// File: src/security/generation_security.rs
pub struct GenerationSecurityManager {
    scanner: Arc<SecurityScanner>,
    sanitizer: Arc<CodeSanitizer>,
    policy_engine: Arc<PolicyEngine>,
}

impl GenerationSecurityManager {
    pub async fn validate_request(
        &self,
        request: &GenerationRequest,
        claims: &Claims,
    ) -> Result<()> {
        // Check tenant isolation
        if request.tenant_id != claims.tenant_id {
            return Err(SecurityError::TenantViolation.into());
        }

        // Validate specification doesn't contain malicious content
        self.scanner.scan_specification(&request.specification).await?;

        // Check user permissions for the requested generation
        self.policy_engine.check_permission(
            claims,
            "code:generate",
            &request.specification,
        ).await?;

        Ok(())
    }

    pub async fn sanitize_generated_code(
        &self,
        code: &mut GeneratedCode,
    ) -> Result<()> {
        for file in &mut code.files {
            // Remove any credentials or secrets
            file.content = self.sanitizer.remove_secrets(&file.content)?;

            // Validate no malicious patterns
            if self.scanner.detect_malicious_patterns(&file.content).await? {
                return Err(SecurityError::MaliciousPattern.into());
            }
        }

        Ok(())
    }
}

// Template security
impl TemplateEngine {
    pub fn validate_template_security(&self, template: &str) -> Result<()> {
        // Prevent template injection
        let disallowed = [
            "eval(", "exec(", "__import__", "subprocess",
            "os.system", "Runtime.exec",
        ];

        for pattern in &disallowed {
            if template.contains(pattern) {
                return Err(SecurityError::UnsafeTemplate(pattern.to_string()).into());
            }
        }

        Ok(())
    }
}
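The `remove_secrets` step is referenced above but not specified. One plausible masking strategy, sketched with the stdlib only, is shown below; the key names are illustrative assumptions, and a production scrubber would use a vetted rule set rather than substring matching:

```rust
/// Hypothetical sketch of one secret-masking strategy: redact the value
/// of any key=value line whose key looks credential-like.
pub fn mask_secrets(line: &str) -> String {
    const SENSITIVE_KEYS: [&str; 3] = ["api_key", "password", "secret"];
    match line.split_once('=') {
        Some((key, _)) if SENSITIVE_KEYS
            .iter()
            .any(|k| key.trim().to_lowercase().contains(k)) =>
        {
            // Keep the key for debuggability, drop the value.
            format!("{}=***REDACTED***", key)
        }
        _ => line.to_string(),
    }
}

fn main() {
    assert_eq!(mask_secrets("API_KEY=abc123"), "API_KEY=***REDACTED***");
    assert_eq!(mask_secrets("retries=3"), "retries=3");
}
```

Masking rather than deleting keeps the generated file syntactically intact while still satisfying the "remove any credentials" requirement.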


10. Logging and Error Handling​

Structured Logging​

// File: src/logging/generation_logging.rs
use std::time::Instant;
use tracing::{error, info, instrument};

#[instrument(skip(self, request), fields(
    request_id = %request.id,
    tenant_id = %request.tenant_id,
    spec_type = ?request.specification.spec_type()
))]
pub async fn generate_code(&self, request: GenerationRequest) -> Result<GenerationResult> {
    info!(
        user_id = %request.user_id,
        language = ?request.preferences.language,
        "Starting code generation"
    );

    let start_time = Instant::now();

    match self.internal_generate(request).await {
        Ok(result) => {
            info!(
                duration_ms = start_time.elapsed().as_millis(),
                files_generated = result.code.files.len(),
                test_coverage = result.validation.coverage,
                "Code generation completed successfully"
            );
            Ok(result)
        }
        Err(e) => {
            error!(
                error = %e,
                duration_ms = start_time.elapsed().as_millis(),
                "Code generation failed"
            );
            Err(e)
        }
    }
}

Error Types and Handling​

// File: src/errors/generation_errors.rs
use actix_web::{HttpResponse, ResponseError};
use serde_json::json;
use thiserror::Error;

#[derive(Error, Debug)]
pub enum GenerationError {
    #[error("Invalid specification: {0}")]
    InvalidSpecification(String),

    #[error("Template not found: {0}")]
    TemplateNotFound(String),

    #[error("All AI providers failed. Primary: {primary}, Tried: {tried:?}")]
    AllProvidersFailed {
        primary: String,
        tried: Vec<String>,
    },

    #[error("Quality gate failed: {reason}")]
    QualityGateFailed { reason: String },

    #[error("Security violation: {0}")]
    SecurityViolation(String),

    #[error("Generation timeout after {seconds} seconds")]
    Timeout { seconds: u64 },
}

impl ResponseError for GenerationError {
    fn error_response(&self) -> HttpResponse {
        match self {
            Self::InvalidSpecification(msg) => {
                HttpResponse::BadRequest().json(json!({
                    "error": "invalid_specification",
                    "message": msg,
                    "suggestion": "Review specification format and required fields"
                }))
            }
            Self::AllProvidersFailed { primary, tried } => {
                HttpResponse::ServiceUnavailable().json(json!({
                    "error": "providers_unavailable",
                    "message": "All AI providers are currently unavailable",
                    "primary": primary,
                    "attempted": tried,
                    "retry_after": 300
                }))
            }
            Self::QualityGateFailed { reason } => {
                HttpResponse::UnprocessableEntity().json(json!({
                    "error": "quality_gate_failed",
                    "message": format!("Generated code did not meet quality standards: {}", reason),
                    "suggestion": "Try adjusting generation parameters or templates"
                }))
            }
            _ => HttpResponse::InternalServerError().json(json!({
                "error": "generation_failed",
                "message": self.to_string()
            })),
        }
    }
}


11. References​

Version Compatibility​

  • FoundationDB: 7.1.0+ for distributed template storage
  • Rust: 1.75.0+ for async trait improvements
  • AI provider SDKs: pinned to the versions listed in Cargo.toml above
  • Template engines: Tera 1.19+, Handlebars 5.1+
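The minimum-version gates above can be checked mechanically. A stdlib-only sketch follows; the helpers `parse` and `at_least` are illustrative, since the real code would use the `semver` crate already declared in Cargo.toml:

```rust
/// Parses a strict "MAJOR.MINOR.PATCH" string into a comparable tuple.
fn parse(v: &str) -> Option<(u64, u64, u64)> {
    let mut it = v.split('.').map(|p| p.parse::<u64>().ok());
    let parsed = (it.next()??, it.next()??, it.next()??);
    // Reject extra components such as "1.2.3.4".
    if it.next().is_some() {
        return None;
    }
    Some(parsed)
}

/// True when `actual` meets or exceeds `required`; unparseable input fails closed.
pub fn at_least(actual: &str, required: &str) -> bool {
    match (parse(actual), parse(required)) {
        (Some(a), Some(r)) => a >= r,
        _ => false,
    }
}

fn main() {
    assert!(at_least("1.76.0", "1.75.0"));  // Rust toolchain gate above
    assert!(!at_least("7.0.9", "7.1.0"));   // FoundationDB gate above
}
```

Tuple comparison orders by major, then minor, then patch, which is exactly the precedence semver defines for release versions (pre-release tags are out of scope for this sketch).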


12. Approval Signatures​

Technical Sign-off​

| Component        | Owner     | Approved | Date       |
| ---------------- | --------- | -------- | ---------- |
| Architecture     | Session 5 | ✓        | 2025-08-31 |
| Implementation   | Pending   | -        | -          |
| Security Review  | Pending   | -        | -          |
| Performance Test | Pending   | -        | -          |

Implementation Checklist​

  • Core generation service implemented
  • AI provider integrations complete
  • Template system operational
  • Quality validators functional
  • API endpoints tested
  • Performance benchmarks met
  • Security controls validated
  • Monitoring and logging configured
