# Optimization Mode

Optimize performance for: $ARGUMENTS

## System Prompt

⚠️ **EXECUTION DIRECTIVE**: When the user invokes this command, you MUST:

1. IMMEDIATELY execute - no questions, no explanations first
2. ALWAYS show full output from script/tool execution
3. ALWAYS provide a summary after execution completes

DO NOT:

- Say "I don't need to take action" - you ALWAYS execute when invoked
- Ask for confirmation unless `requires_confirmation: true` in the frontmatter
- Skip execution even if it seems redundant - run it anyway

The user invoking the command IS the confirmation.


## Usage

```bash
# Optimize a specific file
/optimize src/services/data-processing.ts

# Optimize database queries
/optimize "database queries in user service"

# Optimize for a specific concern
/optimize src/api/ --focus caching

# Full performance audit
/optimize --audit --report
```

## Optimization Framework

### Analysis Areas

1. **Algorithm Complexity**
   - Time complexity (Big-O analysis)
   - Space complexity
   - Better algorithm alternatives
2. **Database Performance**
   - Query optimization
   - Index analysis
   - N+1 query detection
   - Connection pooling
3. **Async Patterns**
   - Blocking I/O detection
   - Concurrency opportunities
   - Batching potential
4. **Caching**
   - Cache hit ratio
   - Cache invalidation strategy
   - Cache key design
5. **Resource Usage**
   - Memory allocation
   - CPU usage
   - Network calls
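Of these areas, N+1 query detection is the easiest to demonstrate end to end. Below is a minimal sketch using the standard-library `sqlite3` module (table and column names are illustrative, not from any real schema): the first version issues one query per user, while the second replaces all of them with a single JOIN.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 9.5), (2, 1, 3.0), (3, 2, 7.25);
""")

# N+1 pattern: 1 query for the users, then 1 more query PER user
users = conn.execute("SELECT id, name FROM users").fetchall()
n_plus_1 = {
    name: [t for (t,) in conn.execute(
        "SELECT total FROM orders WHERE user_id = ?", (uid,))]
    for uid, name in users
}

# Fix: a single JOIN returns the same data in one round trip
joined = {}
for name, total in conn.execute(
        "SELECT u.name, o.total FROM users u "
        "JOIN orders o ON o.user_id = u.id"):
    joined.setdefault(name, []).append(total)

assert n_plus_1 == joined
```

The same shape applies to ORMs: a lazy relationship accessed in a loop is the N+1; eager loading or an explicit join is the fix.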

## Optimization Process

### Phase 1: Measure

```python
# TimingContext and `metrics` are assumed project helpers for recording durations
with TimingContext(metrics, "operation_name"):
    result = expensive_operation()

# Profile memory with the standard-library tracemalloc module
import tracemalloc

tracemalloc.start()
result = operation()
current, peak = tracemalloc.get_traced_memory()
print(f"Memory: {peak / 10**6:.1f} MB")
```

### Phase 2: Identify Bottlenecks

- Find slowest operations
- Identify most frequent calls
- Locate memory leaks
- Detect blocking I/O
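One way to surface the slowest and most frequent calls in Python is the standard-library `cProfile` module: run the workload under the profiler and sort by cumulative time. The functions below are made-up stand-ins for a real hot path.

```python
import cProfile
import io
import pstats

def expensive(i):
    # Stand-in for a genuinely costly call (DB query, parsing, etc.)
    return sum(j * j for j in range(i % 100))

def hot_path(n):
    # Calls the costly function once per item
    return sum(expensive(i) for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
hot_path(5000)
profiler.disable()

# Render the top 5 entries sorted by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

In the printed table, `expensive` shows up with 5000 calls near the top of the cumulative-time ranking, which is exactly the signal to optimize it (or call it less often).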

### Phase 3: Optimize

```python
# BEFORE: O(n * m) nested loops
for user in users:
    for order in orders:
        if order.user_id == user.id:
            process(user, order)

# AFTER: O(n + m) with a hash map
orders_by_user = {}
for order in orders:
    orders_by_user.setdefault(order.user_id, []).append(order)

for user in users:
    for order in orders_by_user.get(user.id, []):
        process(user, order)
```

### Phase 4: Verify

- Benchmark before/after
- Monitor in production
- A/B test if possible
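For the benchmarking step, the standard-library `timeit` module is enough for a quick before/after check. The sketch below times the two versions from Phase 3 against arbitrary synthetic data; note that the correctness assertion comes before any timing claim.

```python
import timeit

# Synthetic data; sizes are arbitrary
users = [{"id": i} for i in range(200)]
orders = [{"user_id": i % 200, "total": 1.0} for i in range(2000)]

def nested():
    # O(n * m): scan every order for every user
    return [(u["id"], o["total"])
            for u in users for o in orders if o["user_id"] == u["id"]]

def indexed():
    # O(n + m): group orders once, then look up per user
    by_user = {}
    for o in orders:
        by_user.setdefault(o["user_id"], []).append(o)
    return [(u["id"], o["total"])
            for u in users for o in by_user.get(u["id"], [])]

assert sorted(nested()) == sorted(indexed())  # same answers first

t_before = timeit.timeit(nested, number=20)
t_after = timeit.timeit(indexed, number=20)
print(f"before={t_before:.4f}s after={t_after:.4f}s")
```

For results worth reporting, prefer `timeit.repeat` and take the minimum of several runs, since single measurements are noisy.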

## Output Format

````markdown
# Performance Optimization Report

## Summary
- **Target**: [What was optimized]
- **Improvement**: [X% faster / Y% less memory]
- **Impact**: [Production implications]

## Bottleneck Analysis

### Bottleneck 1: [Description]
**Current Performance**: X ms / Y operations
**Root Cause**: [Explanation]
**Proposed Fix**: [Solution]
**Expected Improvement**: [Estimated gain]

## Optimizations Applied

### Optimization 1: [Name]
**Before**:
```python
# Slow version
```

**After**:
```python
# Fast version
```

**Benchmark Results**:
- Before: X ms
- After: Y ms
- Improvement: Z% faster

## Recommendations
1. [High priority optimization]
2. [Medium priority]
3. [Low priority / future consideration]
````

## Integration
- Auto-load: `production-patterns` skill (async patterns)
- Use: Profiling tools (cProfile, memory_profiler)
- Use: Benchmarking (timeit, pytest-benchmark)

## Best Practices

✅ **DO**:
- Measure before optimizing (no premature optimization)
- Benchmark improvements
- Document tradeoffs
- Consider maintainability impact
- Test correctness after optimization
- Profile in production-like conditions

❌ **DON'T**:
- Optimize without measuring first
- Sacrifice correctness for speed
- Ignore algorithm complexity
- Forget to benchmark
- Optimize non-bottlenecks
- Skip testing after optimization

---

## Action Policy

<default_behavior>
This command analyzes and recommends without making changes. It provides:
- Detailed analysis of current state
- Specific recommendations with justification
- Prioritized action items
- Risk assessment

User decides which recommendations to implement.
</default_behavior>

<verification>
After analysis, provide:
- Analysis completeness (all aspects covered)
- Recommendation confidence levels
- Specific examples from codebase
- Clear next steps for user
</verification>

## Success Output

When optimize completes:

```
✅ COMMAND COMPLETE: /optimize
Target: [file/area]
Bottlenecks: [count] identified
Improvements: [count] applied
Expected Gain: [X]% faster
```

## Completion Checklist

Before marking complete:
- [ ] Performance measured
- [ ] Bottlenecks identified
- [ ] Optimizations documented
- [ ] Benchmarks provided

## Failure Indicators

This command has FAILED if:
- ❌ No target specified
- ❌ Cannot measure performance
- ❌ No improvements found
- ❌ Missing benchmarks

## When NOT to Use

**Do NOT use when:**
- Code not profiled yet
- Premature optimization
- Correctness issues exist

## Anti-Patterns (Avoid)

| Anti-Pattern | Problem | Solution |
|--------------|---------|----------|
| Optimize without measuring | Wasted effort | Profile first |
| Ignore algorithm complexity | Poor results | Check Big-O |
| Skip benchmarking | Unverified gains | Always benchmark |

## Principles

This command embodies:
- **#3 Complete Execution** - Full analysis
- **#9 Based on Facts** - Measurement-based
- **#5 First Principles** - Root cause focus

**Full Standard:** [CODITECT-STANDARD-AUTOMATION.md](pathname://coditect-core-standards/CODITECT-STANDARD-AUTOMATION.md)