Implementation Mode

Build production-ready implementation for: $ARGUMENTS

System Prompt

⚠️ EXECUTION DIRECTIVE: When the user invokes this command, you MUST:

  1. IMMEDIATELY execute - no questions, no explanations first
  2. ALWAYS show full output from script/tool execution
  3. ALWAYS provide summary after execution completes

DO NOT:

  • Say "I don't need to take action" - you ALWAYS execute when invoked
  • Ask for confirmation unless requires_confirmation: true in frontmatter
  • Skip execution even if it seems redundant - run it anyway

The user invoking the command IS the confirmation.


Usage

# Implement a feature
/implement "user authentication with JWT"

# Implement with specific patterns
/implement "rate limiter" --pattern circuit-breaker

# Implement API endpoint
/implement "REST API for user management"

# Implement with tests
/implement "payment processing service" --with-tests

Implementation Requirements

Code Quality Standards

  • Full error handling with circuit breakers on external calls
  • Async/await patterns - No blocking I/O
  • Type hints on all functions (Python 3.11+)
  • Observability hooks - Metrics, logging, tracing
  • Checkpoint capabilities for long-running operations
  • Unit tests included with >80% coverage
  • Inline documentation with examples
  • Migration notes in comments where applicable
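The "checkpoint capabilities" requirement above can be sketched as a long-running loop that persists its progress so a crashed run resumes where it left off. This is a minimal illustration; the checkpoint path, file format, and function names are assumptions, not part of the command's contract.

```python
import asyncio
import json
import tempfile
from pathlib import Path

# Hypothetical checkpoint location; a real implementation would make
# this configurable and write atomically.
CHECKPOINT = Path(tempfile.gettempdir()) / "implement_checkpoint.json"

def load_checkpoint() -> int:
    """Return the index of the last completed item, or 0 if none saved."""
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())["done"]
    return 0

async def process_items(items: list[str]) -> int:
    """Process items, saving progress after each one so a restart resumes."""
    start = load_checkpoint()
    for i, item in enumerate(items[start:], start=start):
        await asyncio.sleep(0)  # stand-in for real async work on `item`
        CHECKPOINT.write_text(json.dumps({"done": i + 1}))
    return len(items)

CHECKPOINT.unlink(missing_ok=True)  # start this demo run fresh
processed = asyncio.run(process_items(["a", "b", "c"]))
```

On restart, `load_checkpoint()` returns the saved index and the loop skips already-completed work.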

Architecture Patterns

  • Code style: Production (not prototype/toy examples)
  • Framework: Event-driven where applicable
  • Error handling: Fail-fast with detailed context
  • Concurrency: Bulkheads and timeout boundaries
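The "bulkheads and timeout boundaries" pattern above can be sketched with a semaphore capping concurrent calls to a dependency and `asyncio.wait_for` bounding each call's latency. The limits chosen here are illustrative assumptions, not prescribed values.

```python
import asyncio

# Bulkhead: at most 5 in-flight calls to the dependency, so one slow
# downstream service cannot exhaust the whole worker pool.
BULKHEAD = asyncio.Semaphore(5)

async def guarded_call(coro_factory, timeout: float = 2.0):
    """Run a dependency call inside the bulkhead with a timeout boundary."""
    async with BULKHEAD:
        return await asyncio.wait_for(coro_factory(), timeout=timeout)

async def fake_dependency() -> str:
    """Stand-in for an external call."""
    await asyncio.sleep(0.01)
    return "ok"

result = asyncio.run(guarded_call(fake_dependency))
```

If the dependency exceeds the timeout, `asyncio.TimeoutError` propagates and can be handled like any other fail-fast error with context.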

Integration

  • Auto-load: production-patterns skill (circuit breakers, error handling)
  • Auto-load: rust-backend-patterns skill (if Rust)
  • Auto-load: foundationdb-queries skill (if FDB)
  • Auto-load: framework-patterns skill (if event-driven/FSM)

Quality Gates

| Aspect           | Threshold           | Action                    |
|------------------|---------------------|---------------------------|
| Type coverage    | < 100%              | Reject - add type hints   |
| Error handling   | < 100%              | Reject - add try/catch    |
| Tests            | < 80% coverage      | Reject - add tests        |
| Observability    | Missing             | Reject - add metrics/logs |
| Circuit breakers | Missing on external | Reject - add protection   |
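The gates above can be enforced mechanically in CI. The sketch below is a minimal, hypothetical illustration: the `gate()` helper and the metric values are illustrative, not part of any real tool's API.

```python
# Hypothetical CI gate check mirroring the quality-gate table.
# Each metric is a (measured, required) pair; any shortfall is a reject.
def gate(name: str, value: float, threshold: float) -> bool:
    """Return True if the metric meets its threshold, else print a reject."""
    ok = value >= threshold
    if not ok:
        print(f"Reject: {name} {value:.0%} < {threshold:.0%}")
    return ok

results = {
    "type coverage": (1.00, 1.00),   # must be complete
    "error handling": (1.00, 1.00),  # must be complete
    "test coverage": (0.85, 0.80),   # >80% required
}
passed = all(gate(n, v, t) for n, (v, t) in results.items())
```

In practice the measured values would come from tools such as a type checker or a coverage report rather than hard-coded numbers.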

Output Structure

# 1. Imports with type hints
from typing import Optional, List
import asyncio

# 2. Error classes
class OperationError(ApplicationError):
    """Specific error with context."""
    pass

# 3. Implementation with full error handling
async def operation(input: str) -> Result:
    """
    Operation description with examples.

    Args:
        input: Description

    Returns:
        Result description

    Raises:
        OperationError: When X happens
    """
    # Observability
    metrics.increment("operation_calls", tags={"type": "user"})

    # Circuit breaker for external calls
    try:
        result = await circuit_breaker.call(external_call)
    except CircuitBreakerOpenError:
        logger.warning("Circuit open, using cache")
        return cached_result

    # Return
    return result

# 4. Tests
async def test_operation_success():
    result = await operation("test")
    assert result.status == "success"

async def test_operation_failure():
    with pytest.raises(OperationError):
        await operation("invalid")

Best Practices

DO:

  • Include circuit breakers on all external calls
  • Use async/await (no blocking time.sleep())
  • Add observability hooks (metrics.increment, logger.info)
  • Implement retries with exponential backoff
  • Include comprehensive tests
  • Document edge cases and error conditions
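The "retries with exponential backoff" item in the DO list can be sketched as below. The attempt count, base delay, and jitter factor are illustrative defaults, not mandated values.

```python
import asyncio
import random

async def retry(func, attempts: int = 3, base_delay: float = 0.01):
    """Retry an async callable with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return await func()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: fail fast with the real error
            # Back off: base * 2^attempt, with a little jitter to avoid
            # synchronized retry storms across workers.
            delay = base_delay * (2 ** attempt) * (1 + random.random() * 0.1)
            await asyncio.sleep(delay)

calls = {"n": 0}

async def flaky() -> str:
    """Fails twice, then succeeds - simulates a transient outage."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

result = asyncio.run(retry(flaky))
```

In production this would catch only transient error types (timeouts, connection resets) rather than bare `Exception`, and would emit a metric per retry.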

DON'T:

  • Skip error handling "for brevity"
  • Use synchronous I/O
  • Forget type hints
  • Skip tests
  • Ignore observability
  • Use toy/prototype code patterns

Required Tools

| Tool  | Purpose                         | Required |
|-------|---------------------------------|----------|
| Write | Create new implementation files | Yes      |
| Edit  | Modify existing code            | Yes      |
| Read  | Analyze codebase patterns       | Yes      |
| Bash  | Run tests and linting           | Yes      |
| Glob  | Find related files              | Optional |

Auto-Loaded Skills:

  • production-patterns - Circuit breakers, error handling
  • rust-backend-patterns - If Rust implementation
  • foundationdb-queries - If FDB integration
  • framework-patterns - If event-driven/FSM

Output Validation

Before marking complete, verify output contains:

  • All type hints present on functions
  • Error handling for all operations
  • Circuit breakers on external calls
  • Test file with >80% coverage
  • Observability hooks (metrics, logging)
  • Code compiles/lints clean
  • Docstrings with examples

Action Policy

<default_behavior> This command implements changes by default when user intent is clear. Proceeds with:

  • Code generation/modification
  • File creation/updates
  • Configuration changes
  • Git operations (if applicable)

Provides concise progress updates during execution. </default_behavior>

After execution, verify:

  • Files created/modified as intended
  • Code compiles/tests pass (if applicable)
  • Git changes committed (if applicable)
  • Next recommended step provided

Success Output

When implementation completes successfully:

✅ COMMAND COMPLETE: /implement
Feature: <description>
Files: N created/modified
Tests: M passing
Coverage: X%
Next: /commit or continue

Completion Checklist

Before marking complete:

  • Code implemented with types
  • Error handling added
  • Tests written (>80% coverage)
  • Observability hooks added
  • Code compiles/lints clean

Failure Indicators

This command has FAILED if:

  • ❌ Missing type hints
  • ❌ No error handling
  • ❌ No tests included
  • ❌ Code doesn't compile
  • ❌ Missing observability

When NOT to Use

Do NOT use when:

  • Just prototyping (quality standards too high)
  • Need analysis only (use /analyze)
  • Quick fix (use direct editing)

Anti-Patterns (Avoid)

| Anti-Pattern     | Problem             | Solution             |
|------------------|---------------------|----------------------|
| Skip tests       | Low confidence      | Always include tests |
| Skip types       | Runtime errors      | Add type hints       |
| No observability | Blind in production | Add metrics/logs     |

Principles

This command embodies:

  • #3 Complete Execution - Full implementation with tests
  • #9 Based on Facts - Quality gates enforced
  • #4 Separation of Concerns - Clean architecture

Full Standard: CODITECT-STANDARD-AUTOMATION.md