Analysis Mode

Analyze code for: $ARGUMENTS

System Prompt

⚠️ EXECUTION DIRECTIVE: When the user invokes this command, you MUST:

  1. IMMEDIATELY execute - no questions, no explanations first
  2. ALWAYS show full output from script/tool execution
  3. ALWAYS provide summary after execution completes

DO NOT:

  • Say "I don't need to take action" - you ALWAYS execute when invoked
  • Ask for confirmation unless requires_confirmation: true in frontmatter
  • Skip execution even if it seems redundant - run it anyway

The user invoking the command IS the confirmation.


Usage

# Analyze a specific file
/analyze src/services/auth.ts

# Analyze architecture
/analyze "system architecture for payment module"

# Analyze for specific concerns
/analyze src/api/ --focus performance,security

# Analyze code quality
/analyze "code quality of utils directory"

Analysis Framework

Evaluation Criteria

Use the evaluation-framework skill with comprehensive rubrics:

Code Quality (if analyzing implementation):

  • Correctness (30%)
  • Code Structure (20%)
  • Error Handling (15%)
  • Documentation (10%)
  • Type Safety (10%)
  • Performance (10%)
  • Security (5%)

Architecture (if analyzing design):

  • Scalability (25%)
  • Maintainability (20%)
  • Observability (15%)
  • Fault Tolerance (15%)
  • Security (15%)
  • Documentation (10%)

Multi-Agent Systems (if analyzing agents):

  • Coordination Efficiency (25%)
  • Error Cascade Prevention (20%)
  • Token Economics (15%)
  • Observability (15%)
  • Delegation Clarity (15%)
  • Checkpoint/Resume (10%)
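
The weighted Overall Score implied by these rubrics can be sketched as a simple weighted average. The weights below are the Code Quality percentages listed above; the per-criterion scores are hypothetical examples, not values from any real analysis:

```python
# Weights taken from the Code Quality rubric above (must sum to 100%).
CODE_QUALITY_WEIGHTS = {
    "Correctness": 0.30,
    "Code Structure": 0.20,
    "Error Handling": 0.15,
    "Documentation": 0.10,
    "Type Safety": 0.10,
    "Performance": 0.10,
    "Security": 0.05,
}

def overall_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-criterion scores on a 1-5 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return round(sum(scores[c] * w for c, w in weights.items()), 1)

# Hypothetical per-criterion scores for illustration only.
example_scores = {
    "Correctness": 4, "Code Structure": 3, "Error Handling": 4,
    "Documentation": 2, "Type Safety": 5, "Performance": 3, "Security": 4,
}
print(overall_score(example_scores, CODE_QUALITY_WEIGHTS))  # 3.6
```

The Architecture and Multi-Agent rubrics work the same way with their own weight tables.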

Output Format

# Analysis Report

**Overall Score**: X.X/5.0

## Summary
[Brief assessment of strengths and weaknesses]

## Detailed Scores

### Criterion 1
**Score**: X/5 (Level)
**Justification**: [Evidence-based explanation]
**Examples**: [Quote specific code]
**Improvements**: [Actionable suggestions]

[... repeat for all criteria ...]

## Priority Improvements
1. [Most impactful change]
2. [Second priority]
3. [Third priority]

## Security Issues
[If any CRITICAL/HIGH security concerns found]

Integration

  • Auto-load: evaluation-framework skill (LLM-as-judge, rubrics)
  • Auto-load: production-patterns skill (identify missing patterns)
  • Auto-load: framework-patterns skill (architecture analysis)

Required Tools

| Tool | Purpose | Required |
|------|---------|----------|
| Read | Access source code for analysis | Yes |
| Glob | Find files in target scope | Yes |
| Grep | Search for patterns and issues | Yes |
| Bash | Run linters, security scanners | Optional |
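
When the optional Bash tool is available, external scanners could be invoked along these lines. This is a sketch only: the scanner commands are illustrative assumptions, and each tool is skipped unless it is actually installed:

```python
import shutil
import subprocess

# Illustrative optional scanners; only run the ones actually installed.
OPTIONAL_SCANNERS = [
    ["eslint", "src/"],           # lint JS/TS sources
    ["semgrep", "scan", "src/"],  # static security analysis
]

def run_optional_scanners(scanners):
    """Run each scanner if present; record its status without failing the analysis."""
    results = {}
    for cmd in scanners:
        if shutil.which(cmd[0]) is None:
            results[cmd[0]] = "skipped (not installed)"
            continue
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results[cmd[0]] = "ok" if proc.returncode == 0 else f"exit {proc.returncode}"
    return results
```

Scanner output, when available, supplements the rubric-based review rather than replacing it.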

Auto-Loaded Skills:

  • evaluation-framework - LLM-as-judge rubrics
  • production-patterns - Production readiness patterns
  • framework-patterns - Architecture analysis

Quality Gates

| Aspect | Threshold | Action |
|--------|-----------|--------|
| Missing criteria | Any | Reject - analyze all aspects |
| No examples | Any criterion | Reject - quote specific code |
| Vague feedback | Any | Reject - be specific and actionable |

Best Practices

DO:

  • Quote specific code examples
  • Provide actionable, specific feedback
  • Rank issues by priority/severity
  • Consider context and constraints
  • Include positive feedback (strengths)
  • Reference production patterns

DON'T:

  • Be vague ("could be better")
  • Only provide scores without justification
  • Skip security analysis
  • Ignore performance implications
  • Forget to suggest concrete improvements

Action Policy

<default_behavior> This command analyzes and recommends without making changes. It provides:

  • Comprehensive code/architecture analysis with structural insights
  • Specific issues identified with severity and impact assessment
  • Detailed recommendations with concrete improvement strategies
  • Security and performance implications evaluation
  • Architectural quality metrics and patterns analysis

The user decides which analysis recommendations to implement. </default_behavior>

After analysis completion, verify:

  • All requested aspects analyzed comprehensively
  • Issues categorized by type and severity
  • Concrete improvements suggested (not abstract)
  • Security implications evaluated
  • Performance characteristics assessed
  • Architectural patterns identified
  • Code quality metrics provided
  • Next steps clearly prioritized

Success Output

When analysis completes successfully:

✅ COMMAND COMPLETE: /analyze
Target: <path or scope>
Score: X.X/5.0
Issues: N (Critical: X, High: Y)
Report: Displayed

Output Validation

Before completing, verify output contains:

  • Overall score (X.X/5.0)
  • Summary with strengths and weaknesses
  • Detailed scores per criterion with:
    • Score (X/5)
    • Justification with evidence
    • Specific code examples quoted
    • Actionable improvements
  • Priority improvements list (ranked)
  • Security issues section (if any Critical/High found)
  • Issue counts (Critical/High/Medium/Low)
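
A minimal sketch of this validation step, assuming the report follows the Output Format template above (the section names and score pattern are taken from that template):

```python
import re

# Section headings from the Output Format template.
REQUIRED_SECTIONS = [
    "## Summary",
    "## Detailed Scores",
    "## Priority Improvements",
]
SCORE_PATTERN = re.compile(r"\*\*Overall Score\*\*: \d\.\d/5\.0")

def validate_report(report: str) -> list[str]:
    """Return a list of validation problems; an empty list means the report passes."""
    problems = []
    if not SCORE_PATTERN.search(report):
        problems.append("missing overall score (X.X/5.0)")
    for section in REQUIRED_SECTIONS:
        if section not in report:
            problems.append(f"missing section: {section}")
    return problems
```

A real implementation would also check per-criterion scores and issue counts; this sketch covers only the top-level structure.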

Completion Checklist

Before marking complete:

  • Target code/architecture read
  • Evaluation criteria applied
  • Scores calculated with justification
  • Specific improvements listed
  • Security issues highlighted

Failure Indicators

This command has FAILED if:

  • ❌ Target path not found
  • ❌ No criteria applied
  • ❌ Vague feedback without examples
  • ❌ Missing security analysis

When NOT to Use

Do NOT use when:

  • File doesn't exist
  • Need quick overview (use /what)
  • Want to implement changes (use /implement)

Anti-Patterns (Avoid)

| Anti-Pattern | Problem | Solution |
|--------------|---------|----------|
| Vague feedback | Not actionable | Quote specific code |
| Skip security | Miss vulnerabilities | Always check security |
| No prioritization | Unclear next steps | Rank by severity |

Principles

This command embodies:

  • #9 Based on Facts - Evidence-based scoring
  • #6 Clear, Understandable - Specific examples
  • #10 Research When in Doubt - Comprehensive analysis

Full Standard: CODITECT-STANDARD-AUTOMATION.md