# CODITECT Step Dev Platform - Testing Strategy

**Last Updated:** 2026-02-07
**Coverage Target:** 80%+ (enforced via cargo-tarpaulin)
**Quality Gate:** B+ or higher (82/100)
## Quick Start

```bash
# Run all tests
cargo test --workspace

# Run tests with output
cargo test --workspace -- --nocapture

# Run specific crate tests
cargo test -p coditect-runtime
cargo test -p coditect-control-plane
cargo test -p coditect-audit-chain
cargo test -p coditect-shared

# Generate coverage report
cargo tarpaulin --workspace --out Html --output-dir coverage/
open coverage/index.html

# Check coverage threshold
cargo tarpaulin --workspace --fail-under 80
```
## Test Organization

### Crate Structure

```text
crates/
├── runtime/
│   ├── src/
│   │   ├── executor.rs           # Inline tests: #[cfg(test)] mod tests
│   │   ├── events.rs             # Inline tests: #[cfg(test)] mod tests
│   │   └── ...
│   └── tests/                    # Integration tests (if needed)
├── control-plane/
│   ├── src/
│   │   └── ...
│   └── tests/
│       ├── api_contracts.rs      # API contract tests
│       └── tenant_isolation.rs   # Security isolation tests
├── audit-chain/
│   └── src/
│       └── lib.rs                # Comprehensive inline tests
└── shared/
    └── src/
        ├── rbac.rs               # Extensive RBAC permission matrix tests
        └── ...
```
### Test Location Guidelines

| Test Type | Location | Example |
|---|---|---|
| Unit Tests | Inline `#[cfg(test)]` modules | `crates/runtime/src/executor.rs` |
| Integration Tests | `tests/` directory | `crates/control-plane/tests/api_contracts.rs` |
| Helper Functions | `test_helpers` module in `lib.rs` | `control_plane::test_helpers::build_test_router()` |
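A `test_helpers` module lets integration tests in `tests/` reuse the crate's setup code. A minimal sketch of the pattern, assuming nothing about the real `control_plane` API (the `TestContext` type and `build_test_context` function here are hypothetical stand-ins for helpers like `build_test_router`):

```rust
// Hypothetical sketch: expose a test_helpers module from lib.rs so
// integration tests can share deterministic setup. The names below
// are illustrative, not the actual control_plane API.
pub mod test_helpers {
    /// Minimal shared context that integration tests can build on.
    pub struct TestContext {
        pub tenant_id: String,
        pub project_id: String,
    }

    /// Returns a context with fixed IDs so tests stay reproducible.
    pub fn build_test_context() -> TestContext {
        TestContext {
            tenant_id: "t1".to_string(),
            project_id: "p1".to_string(),
        }
    }
}

fn main() {
    let ctx = test_helpers::build_test_context();
    println!("{}/{}", ctx.tenant_id, ctx.project_id);
}
```

Integration tests then import the helpers with `use control_plane::test_helpers::...;` instead of duplicating setup in every test file.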
## Test Categories

### 1. Unit Tests (Inline)

Purpose: Test individual functions, structs, and methods in isolation.

Guidelines:
- Place tests in a `#[cfg(test)] mod tests {}` block at the bottom of each module
- Test both the happy path AND error cases
- Use descriptive test names: `test_<function>_<scenario>_<expected>`
Example:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_step_validation_event_step_missing_subscribes() {
        let config = base_config(StepType::Event);
        // `subscribes` is empty - validation should fail
        let result = validate_step_config(&config);
        assert!(result.is_err());
        assert!(result.unwrap_err().to_string().contains("subscribe"));
    }
}
```
### 2. Integration Tests (tests/ directory)

Purpose: Test interactions between components, API contracts, and system behavior.

Guidelines:
- Place in `crates/<name>/tests/*.rs`
- Use the `test_helpers` module for setup
- Test realistic scenarios (multi-component workflows)
Example:

```rust
// crates/control-plane/tests/api_contracts.rs
use control_plane::test_helpers::build_test_router;
use axum::body::Body; // needed for Body::empty()
use axum::http::{Request, StatusCode};
use tower::ServiceExt; // for oneshot()

#[tokio::test]
async fn test_health_endpoint_returns_200() {
    let app = build_test_router();
    let response = app
        .oneshot(Request::builder().uri("/health").body(Body::empty()).unwrap())
        .await
        .unwrap();
    assert_eq!(response.status(), StatusCode::OK);
}
```
### 3. Property-Based Tests (Advanced)

Purpose: Verify invariants hold across wide input ranges.

Guidelines:
- Use the `proptest` crate for critical paths
- Define properties (invariants that must always hold)
- Let the library generate test cases
Example:

```rust
use proptest::prelude::*;

proptest! {
    #[test]
    fn audit_chain_always_verifies_when_built_correctly(
        actions in prop::collection::vec(any::<String>(), 0..100)
    ) {
        let chain = AuditChain;
        let mut blocks = vec![AuditChain::genesis("t1")];
        for action in &actions {
            let prev = blocks.last().unwrap();
            let next = chain.append(prev, "t1", "user", action).unwrap();
            blocks.push(next);
        }
        prop_assert!(verify_chain(&blocks));
    }
}
```
## Test Patterns

### Pattern 1: Builder Pattern for Test Data

```rust
pub struct TenantBuilder {
    name: String,
    region: String,
}

impl TenantBuilder {
    pub fn new() -> Self {
        Self {
            name: "Test Tenant".into(),
            region: "us-east1".into(),
        }
    }

    pub fn with_name(mut self, name: impl Into<String>) -> Self {
        self.name = name.into();
        self
    }

    pub fn build(self) -> Tenant {
        Tenant {
            id: TenantId::new("test-tenant"),
            name: self.name,
            residency_region: self.region,
        }
    }
}

// Usage:
let tenant = TenantBuilder::new()
    .with_name("Acme Corp")
    .build();
```
### Pattern 2: Test Fixtures

```rust
fn test_ids() -> (TenantId, ProjectId) {
    (TenantId::new("t1"), ProjectId::new("p1"))
}

fn base_config(step_type: StepType) -> StepConfig {
    StepConfig {
        name: "test-step".into(),
        step_type,
        flows: vec!["default".into()],
        file_path: PathBuf::from("steps/test.ts"),
        // ... sensible defaults
    }
}
```
### Pattern 3: Async Test Setup

```rust
#[tokio::test]
async fn test_event_subscription_workflow() {
    // Given: Event adapter with subscriber
    let adapter = InMemoryEventAdapter::new();
    let mut sub = adapter.subscribe("order.created").await.unwrap();

    // When: Event is emitted
    let (tid, pid) = test_ids();
    let event = Event::new(&tid, &pid, "order.created", json!({"order_id": "o1"}));
    adapter.emit(event.clone()).await.unwrap();

    // Then: Subscriber receives it
    let received = sub.receiver.recv().await.unwrap();
    assert_eq!(received.topic, "order.created");
    assert_eq!(received.tenant_id, "t1");
}
```
### Pattern 4: Error Case Testing

```rust
#[test]
fn test_validation_fails_for_invalid_input() {
    // All entries share one type (String), so the owned
    // "x".repeat(1000) can sit alongside the literals
    let invalid_inputs = vec![
        (String::new(), "Empty string should fail"),
        ("x".repeat(1000), "Too long should fail"),
        ("invalid@@@format".to_string(), "Invalid format should fail"),
    ];
    for (input, description) in invalid_inputs {
        let result = validate_input(&input);
        assert!(result.is_err(), "{}", description);
    }
}
```
## Coverage Requirements

### Per-Crate Targets

| Crate | Target | Current | Status |
|---|---|---|---|
| `runtime` | 80% | ~75% | 🟡 In Progress |
| `control-plane` | 80% | ~70% | 🟡 In Progress |
| `audit-chain` | 90% | ~90% | ✅ Excellent |
| `shared` | 85% | ~85% | ✅ Good |
### Coverage Exclusions

Excluded from the coverage calculation:
- Test modules (`#[cfg(test)]`)
- Generated code (e.g., protobuf bindings)
- Main binaries (`src/main.rs`) - integration tested separately
- Development tools and scripts
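One way to exclude individual items from instrumentation is the `tarpaulin_include` cfg flag described in the cargo-tarpaulin README (worth verifying against the tarpaulin version in use). A minimal sketch; the `debug_dump` helper is hypothetical:

```rust
// Hypothetical dev-only helper excluded from coverage measurement.
// Per the cargo-tarpaulin docs, tarpaulin sets the `tarpaulin_include`
// cfg during coverage runs, so `not(tarpaulin_include)` compiles the
// item out of the instrumented build while keeping it in normal builds.
#[cfg(not(tarpaulin_include))]
fn debug_dump(state: &str) -> String {
    format!("[debug] state = {state}")
}

fn main() {
    // Guard the call site too: under tarpaulin the function does not
    // exist, so an unguarded call would fail to compile.
    #[cfg(not(tarpaulin_include))]
    println!("{}", debug_dump("ready"));
}
```

Whole files and glob patterns can instead be skipped on the command line with `--exclude-files`.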
## Running Tests in CI/CD

### GitHub Actions Example

```yaml
# .github/workflows/test.yml
name: Test & Coverage
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install Rust
        uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
          override: true
      - name: Install tarpaulin
        run: cargo install cargo-tarpaulin
      - name: Run tests with coverage
        run: cargo tarpaulin --workspace --out Lcov --fail-under 80
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          files: ./lcov.info
```
## Testing Best Practices

### ✅ DO

- **Test error paths, not just happy paths**

  ```rust
  #[test]
  fn test_division_by_zero_returns_error() {
      assert!(divide(10, 0).is_err());
  }
  ```

- **Use descriptive test names**

  ```rust
  #[test]
  fn test_user_creation_with_duplicate_email_returns_conflict_error() { }
  ```

- **Test boundary conditions**

  ```rust
  #[test]
  fn test_timeout_ms_validation_rejects_zero() { }

  #[test]
  fn test_timeout_ms_validation_rejects_above_max() { }
  ```

- **Isolate tests (no shared mutable state)**

  ```rust
  #[tokio::test]
  async fn test_creates_own_event_adapter() {
      let adapter = InMemoryEventAdapter::new(); // Fresh instance
      // Test uses only this adapter
  }
  ```

- **Use fixtures for consistent test data**

  ```rust
  fn test_ids() -> (TenantId, ProjectId) {
      (TenantId::new("t1"), ProjectId::new("p1"))
  }
  ```
### ❌ DON'T

- **Don't test implementation details**

  ```rust
  // Bad: Testing internal field
  assert_eq!(executor.semaphore_internal_state, 5);

  // Good: Testing observable behavior
  assert_eq!(executor.available_permits(), 5);
  ```

- **Don't use unwrap() in tests without context**

  ```rust
  // Bad:
  let result = do_thing().unwrap();

  // Good:
  let result = do_thing().expect("Thing should succeed with valid input");
  ```

- **Don't write flaky tests (timing-dependent)**

  ```rust
  // Bad: a fixed sleep may be too short on a loaded CI machine
  tokio::time::sleep(Duration::from_millis(100)).await;
  assert!(condition_is_true());

  // Good: poll until the condition holds, with an overall timeout
  tokio::time::timeout(Duration::from_secs(1), async {
      while !condition_is_true() {
          tokio::time::sleep(Duration::from_millis(10)).await;
      }
  }).await.expect("Condition should become true");
  ```

- **Don't ignore test failures without documenting why**

  ```rust
  // Bad:
  #[ignore]
  #[test]
  fn test_something() { }

  // Good:
  #[ignore = "Requires database - run with cargo test -- --ignored"]
  #[test]
  fn test_database_integration() { }
  ```
## Debugging Failed Tests

### View Test Output

```bash
# Show println! and dbg! output from failing tests
cargo test -- --nocapture

# Also show the captured output of passing tests
cargo test -- --show-output

# Run a specific test
cargo test test_name

# Run tests matching a pattern
cargo test executor::tests
```
### Async Test Debugging

```rust
#[tokio::test]
async fn test_with_tracing() {
    // Enable tracing for the test. try_init() avoids a panic when
    // another test in the same binary already installed a subscriber.
    let _ = tracing_subscriber::fmt()
        .with_max_level(tracing::Level::DEBUG)
        .try_init();

    // Test code with tracing::debug!() calls will now show output
    // (run with `cargo test -- --nocapture` to see it)
}
```
## Test Documentation

### Documenting Test Scenarios

```rust
/// Tests that concurrent step execution respects the semaphore limit.
///
/// Scenario:
/// 1. Create executor with max 5 concurrent executions
/// 2. Spawn 20 step executions simultaneously
/// 3. Verify only 5 run concurrently (15 wait for permits)
/// 4. Verify all 20 eventually complete successfully
///
/// Expected: No deadlocks, all executions complete, max 5 concurrent.
#[tokio::test]
async fn test_executor_concurrent_limit_enforcement() {
    // Test implementation...
}
```
## Performance Testing

### Benchmarking (Optional)

```bash
# Install criterion's cargo subcommand
cargo install cargo-criterion

# Run benchmarks
cargo criterion
```

Example benchmark:

```rust
// benches/executor_bench.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn executor_throughput(c: &mut Criterion) {
    c.bench_function("executor_10_steps", |b| {
        b.iter(|| {
            // Benchmark code; wrap inputs in black_box() so the
            // optimizer cannot elide the measured work
        })
    });
}

criterion_group!(benches, executor_throughput);
criterion_main!(benches);
```
## Security Testing

### Tenant Isolation Tests

Critical security requirement: tenants MUST NOT access each other's data.

```rust
#[tokio::test]
async fn test_tenant_isolation_project_access() {
    let pool = test_db_pool().await;
    let proj_repo = ProjectRepository::new(pool);

    // Tenant A creates a project
    let proj = proj_repo.create(&TenantId::new("tenant-a"), "proj-a").await.unwrap();

    // Tenant B attempts access - MUST FAIL
    let result = proj_repo.get_with_tenant(&proj.id, &TenantId::new("tenant-b")).await;
    assert!(result.is_err());
    assert!(matches!(result.unwrap_err(), PlatformError::NotFound(_)));
}
```
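The isolation check above hinges on the repository scoping every lookup by tenant. A minimal, self-contained sketch of that pattern; the `InMemoryProjectRepo` type and its methods are hypothetical illustrations, not the real `ProjectRepository` API:

```rust
use std::collections::HashMap;

// Hypothetical in-memory repository illustrating tenant-scoped lookups:
// every read is keyed by (tenant_id, project_id), so a project created
// under one tenant is invisible - not just forbidden - to another.
struct InMemoryProjectRepo {
    projects: HashMap<(String, String), String>, // (tenant, project) -> name
}

impl InMemoryProjectRepo {
    fn new() -> Self {
        Self { projects: HashMap::new() }
    }

    fn create(&mut self, tenant: &str, id: &str, name: &str) {
        self.projects
            .insert((tenant.to_string(), id.to_string()), name.to_string());
    }

    /// Returns NotFound (rather than Forbidden) for cross-tenant access,
    /// so the response does not even leak that the project exists.
    fn get_with_tenant(&self, id: &str, tenant: &str) -> Result<&String, &'static str> {
        self.projects
            .get(&(tenant.to_string(), id.to_string()))
            .ok_or("NotFound")
    }
}

fn main() {
    let mut repo = InMemoryProjectRepo::new();
    repo.create("tenant-a", "proj-a", "Project A");

    // Owning tenant sees the project; another tenant gets NotFound.
    assert!(repo.get_with_tenant("proj-a", "tenant-a").is_ok());
    assert!(repo.get_with_tenant("proj-a", "tenant-b").is_err());
    println!("isolation holds");
}
```

A database-backed repository achieves the same thing by putting `tenant_id` in the `WHERE` clause of every query rather than filtering after the fetch.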
## Test Maintenance

### Updating Tests When Code Changes

- Change breaks tests? Good! The tests caught a regression.
- Update the test to match the new behavior (if the change was intentional)
- Document why the test changed in the commit message
- Verify coverage didn't drop (`cargo tarpaulin`)
### Refactoring Tests

When tests become hard to read:
- Extract common setup into helper functions
- Use the builder pattern for test data
- Group related tests in modules

Example refactor:

```rust
// Before: Repetitive setup
#[test]
fn test_a() {
    let config = StepConfig { name: "test".into(), step_type: StepType::Event, /* ... */ };
    // Test A
}

#[test]
fn test_b() {
    let config = StepConfig { name: "test".into(), step_type: StepType::Event, /* ... */ };
    // Test B
}

// After: Shared fixture
fn base_config() -> StepConfig {
    StepConfig { name: "test".into(), step_type: StepType::Event, /* ... */ }
}

#[test]
fn test_a() {
    let config = base_config();
    // Test A
}

#[test]
fn test_b() {
    let config = base_config();
    // Test B
}
```
## Resources

### Documentation

### Internal Docs

Questions? Contact the QA team or open a GitHub discussion.