# Analysis Mode

Analyze code for: $ARGUMENTS
## System Prompt

⚠️ EXECUTION DIRECTIVE: When the user invokes this command, you MUST:

- IMMEDIATELY execute - no questions, no explanations first
- ALWAYS show full output from script/tool execution
- ALWAYS provide a summary after execution completes

DO NOT:

- Say "I don't need to take action" - you ALWAYS execute when invoked
- Ask for confirmation unless `requires_confirmation: true` is set in frontmatter
- Skip execution even if it seems redundant - run it anyway

The user invoking the command IS the confirmation.
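The `requires_confirmation` flag referenced above lives in the command's frontmatter. A minimal sketch, assuming a typical command-definition layout (only `requires_confirmation` is mentioned in this document; the other fields are illustrative):

```yaml
---
# Hypothetical frontmatter for this command definition.
# Only requires_confirmation is referenced in this document;
# description is an illustrative field.
description: Analyze code, architecture, or agents against scoring rubrics
requires_confirmation: false  # invoking the command IS the confirmation
---
```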
## Usage

```shell
# Analyze a specific file
/analyze src/services/auth.ts

# Analyze architecture
/analyze "system architecture for payment module"

# Analyze for specific concerns
/analyze src/api/ --focus performance,security

# Analyze code quality
/analyze "code quality of utils directory"
```
## Analysis Framework

### Evaluation Criteria

Use the `evaluation-framework` skill with comprehensive rubrics:

**Code Quality** (if analyzing implementation):
- Correctness (30%)
- Code Structure (20%)
- Error Handling (15%)
- Documentation (10%)
- Type Safety (10%)
- Performance (10%)
- Security (5%)
**Architecture** (if analyzing design):
- Scalability (25%)
- Maintainability (20%)
- Observability (15%)
- Fault Tolerance (15%)
- Security (15%)
- Documentation (10%)
**Multi-Agent Systems** (if analyzing agents):
- Coordination Efficiency (25%)
- Error Cascade Prevention (20%)
- Token Economics (15%)
- Observability (15%)
- Delegation Clarity (15%)
- Checkpoint/Resume (10%)
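Each rubric reduces to a weighted average over its criteria. A minimal sketch of the arithmetic (weights taken from the Code Quality rubric above; the per-criterion scores are hypothetical, for illustration only):

```python
# Sketch: combine per-criterion scores (0-5) into an overall score
# using the Code Quality rubric weights above.
weights = {
    "Correctness": 0.30,
    "Code Structure": 0.20,
    "Error Handling": 0.15,
    "Documentation": 0.10,
    "Type Safety": 0.10,
    "Performance": 0.10,
    "Security": 0.05,
}
scores = {  # hypothetical per-criterion scores on a 0-5 scale
    "Correctness": 4, "Code Structure": 3, "Error Handling": 3,
    "Documentation": 2, "Type Safety": 4, "Performance": 3, "Security": 5,
}
overall = sum(weights[c] * scores[c] for c in weights)
print(f"Overall Score: {overall:.1f}/5.0")  # -> Overall Score: 3.4/5.0
```

Because the weights sum to 1.0, the overall score stays on the same 0-5 scale as the individual criteria.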
## Output Format

```markdown
# Analysis Report

**Overall Score**: X.X/5.0

## Summary
[Brief assessment of strengths and weaknesses]

## Detailed Scores

### Criterion 1
**Score**: X/5 (Level)
**Justification**: [Evidence-based explanation]
**Examples**: [Quote specific code]
**Improvements**: [Actionable suggestions]

[... repeat for all criteria ...]

## Priority Improvements
1. [Most impactful change]
2. [Second priority]
3. [Third priority]

## Security Issues
[If any CRITICAL/HIGH security concerns found]
```
## Integration

- Auto-load: `evaluation-framework` skill (LLM-as-judge, rubrics)
- Auto-load: `production-patterns` skill (identify missing patterns)
- Auto-load: `framework-patterns` skill (architecture analysis)
## Required Tools

| Tool | Purpose | Required |
|---|---|---|
| Read | Access source code for analysis | Yes |
| Glob | Find files in target scope | Yes |
| Grep | Search for patterns and issues | Yes |
| Bash | Run linters, security scanners | Optional |
**Auto-Loaded Skills:**

- `evaluation-framework` - LLM-as-judge rubrics
- `production-patterns` - Production readiness patterns
- `framework-patterns` - Architecture analysis
## Quality Gates
| Aspect | Threshold | Action |
|---|---|---|
| Missing criteria | Any | Reject - analyze all aspects |
| No examples | Any criterion | Reject - quote specific code |
| Vague feedback | Any | Reject - be specific and actionable |
## Best Practices

✅ **DO:**
- Quote specific code examples
- Provide actionable, specific feedback
- Rank issues by priority/severity
- Consider context and constraints
- Include positive feedback (strengths)
- Reference production patterns
❌ **DON'T:**
- Be vague ("could be better")
- Only provide scores without justification
- Skip security analysis
- Ignore performance implications
- Forget to suggest concrete improvements
## Action Policy
<default_behavior> This command analyzes and recommends without making changes. Provides:
- Comprehensive code/architecture analysis with structural insights
- Specific issues identified with severity and impact assessment
- Detailed recommendations with concrete improvement strategies
- Security and performance implications evaluation
- Architectural quality metrics and patterns analysis
User decides which analysis recommendations to implement. </default_behavior>
## Success Output

When analysis completes successfully:

```
✅ COMMAND COMPLETE: /analyze
Target: <path or scope>
Score: X.X/5.0
Issues: N (Critical: X, High: Y)
Report: Displayed
```
## Output Validation

Before completing, verify the output contains:

- Overall score (X.X/5.0)
- Summary with strengths and weaknesses
- Detailed scores per criterion with:
  - Score (X/5)
  - Justification with evidence
  - Specific code examples quoted
  - Actionable improvements
- Priority improvements list (ranked)
- Security issues section (if any Critical/High found)
- Issue counts (Critical/High/Medium/Low)
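Checks like these can be automated with simple pattern matching against the rendered report. A minimal sketch, assuming the report follows the Output Format template above (the patterns and sample report are illustrative, not an exhaustive validator):

```python
import re

# Sketch: verify a rendered analysis report contains the required
# sections from the Output Format template. Patterns are illustrative.
REQUIRED_PATTERNS = [
    r"\*\*Overall Score\*\*:\s*\d\.\d/5\.0",  # overall score line
    r"## Summary",                            # summary section
    r"## Detailed Scores",                    # per-criterion scores
    r"## Priority Improvements",              # ranked improvements
]

def validate_report(report: str) -> list[str]:
    """Return the patterns that did NOT match (empty list = report passes)."""
    return [p for p in REQUIRED_PATTERNS if not re.search(p, report)]

sample = (
    "# Analysis Report\n**Overall Score**: 3.4/5.0\n"
    "## Summary\n...\n## Detailed Scores\n...\n"
    "## Priority Improvements\n1. ...\n"
)
print(validate_report(sample))  # -> [] (all required sections present)
```

A real validator would also check the conditional items (the Security Issues section, issue counts), but the same search-per-requirement shape applies.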
## Completion Checklist

Before marking complete:
- Target code/architecture read
- Evaluation criteria applied
- Scores calculated with justification
- Specific improvements listed
- Security issues highlighted
## Failure Indicators
This command has FAILED if:
- ❌ Target path not found
- ❌ No criteria applied
- ❌ Vague feedback without examples
- ❌ Missing security analysis
## When NOT to Use

Do NOT use when:

- File doesn't exist
- Need a quick overview (use `/what`)
- Want to implement changes (use `/implement`)
## Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Vague feedback | Not actionable | Quote specific code |
| Skip security | Miss vulnerabilities | Always check security |
| No prioritization | Unclear next steps | Rank by severity |
## Principles

This command embodies:

- **#9 Based on Facts** - Evidence-based scoring
- **#6 Clear, Understandable** - Specific examples
- **#10 Research When in Doubt** - Comprehensive analysis

**Full Standard**: CODITECT-STANDARD-AUTOMATION.md