HOW-TO: Create a New Slash Command
**Estimated Time:** 20-40 minutes (simple command: 20 min; complex with agents: 40 min)
**Difficulty:** Beginner to Intermediate
**Prerequisites:** Basic Markdown knowledge, understanding of the CODITECT framework
Overview
This guide walks you through creating a new slash command for the CODITECT framework from scratch. By the end, you'll have a production-ready command that users can invoke with /your-command.
What You'll Learn:
- How to structure command files with YAML frontmatter
- How to name commands following verb-noun patterns
- How to define arguments and integrate with agents
- How to write Action Policy sections for clear behavior
- How to test and validate your command
What You'll Build:
A working /analyze-code command that delegates to the codebase-analyzer agent for comprehensive code review.
Prerequisites
Before you begin, ensure you have:
- CODITECT framework installed and configured
- Access to the `.claude/commands/` directory
- Basic understanding of Markdown syntax
- Familiarity with YAML frontmatter
- Read CODITECT-STANDARD-COMMANDS.md (recommended)
Knowledge Requirements:
- YAML frontmatter (CRITICAL) - Commands MUST start with YAML metadata
- Kebab-case naming - Use the `verb-noun.md` pattern (e.g., `analyze-code.md`)
- Action Policy - Every command needs behavior definitions
Step 1: Define Command Purpose
Time: 5 minutes
Before writing any code, clearly define what your command does and why it's needed.
Questions to Answer
- **What problem does this solve?**
  - Example: "We need a quick way to analyze code quality without manual review"
- **Who will use this?**
  - Developers, QA engineers, code reviewers
- **What are the inputs?**
  - File paths, code sections, or analysis criteria
- **What are the outputs?**
  - Analysis report with scores, issues, recommendations
- **How does this fit into existing workflows?**
  - Integrates with the code review process and pre-commit checks
Example: analyze-code Command
Purpose: Provide comprehensive code analysis with quality, security, and performance assessment
Users: Developers performing code reviews
Inputs: File paths or directories to analyze
Outputs: Structured analysis report with scores and recommendations
Workflow: Run before commits, during PRs, or on-demand for any code
Document Your Answers
Create a brief specification document:
# Command Specification: analyze-code
## Purpose
Analyze code for quality, security, and performance issues using the codebase-analyzer agent.
## User Story
As a developer, I want to quickly analyze code quality so that I can identify issues before code review.
## Inputs
- Target files or directories (via $ARGUMENTS)
- Optional: analysis criteria (security, performance, architecture)
## Outputs
- Overall quality score (A-F)
- Detailed findings by category
- Prioritized recommendations
## Integration
- Delegates to codebase-analyzer agent
- Uses evaluation-framework skill for scoring
Step 2: Choose Command Name
Time: 3 minutes
Command names MUST follow the verb-noun pattern with kebab-case.
Naming Guidelines
Pattern: {verb}-{noun}.md
Good Verbs:
- `analyze` - Examine and evaluate
- `generate` - Create new content
- `validate` - Check correctness
- `execute` - Run operations
- `create` - Make new artifacts
- `setup` - Initialize/configure
Good Nouns:
- Specific domain objects (code, config, report)
- Clear scope indicators (submodule, environment)
Examples
| ✅ Good Names | ❌ Bad Names | Why Bad? |
|---|---|---|
| `analyze-code` | `analyzeCode` | camelCase (wrong) |
| `generate-report` | `report` | Missing verb |
| `validate-config` | `config_validate` | snake_case (wrong) |
| `create-checkpoint` | `do-checkpoint` | Vague verb |
| `setup-environment` | `env-setup` | Noun-verb (wrong order) |
Our Example
Command Name: analyze-code
Filename: analyze-code.md
Invocation: /analyze-code
Rationale:
- `analyze` - Clear action verb
- `code` - Specific domain object
- Total: 12 characters (well under the 64-character limit)
- Easy to remember and type
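The naming rules above are mechanical enough to lint automatically. Below is a minimal Python sketch (a hypothetical helper, not part of CODITECT) that checks kebab-case shape, the leading verb, and the 64-character limit; the approved verb list is taken from this guide.

```python
import re

# Hypothetical linter sketch; verb list and 64-character limit come from this guide.
APPROVED_VERBS = {"analyze", "generate", "validate", "execute", "create", "setup"}
KEBAB_VERB_NOUN = re.compile(r"^[a-z]+(-[a-z0-9]+)+$")  # lowercase words joined by hyphens

def check_command_name(name: str) -> list[str]:
    """Return a list of naming problems; an empty list means the name passes."""
    problems = []
    if len(name) > 64:
        problems.append("name exceeds 64 characters")
    if not KEBAB_VERB_NOUN.match(name):
        problems.append("name is not kebab-case verb-noun (e.g., analyze-code)")
    elif name.split("-", 1)[0] not in APPROVED_VERBS:
        problems.append("leading word is not a recognized action verb")
    return problems

print(check_command_name("analyze-code"))  # → []
print(check_command_name("analyzeCode"))   # flagged: not kebab-case
print(check_command_name("env-setup"))     # flagged: noun-verb order
```

A check like this can run in a pre-commit hook so bad names never reach review.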
Step 3: Create Command File
Time: 2 minutes
Create the command file in the correct location with YAML frontmatter.
Directory Structure
Choose the appropriate scope:
| Scope | Directory | When to Use |
|---|---|---|
| Project | .claude/commands/ | Project-specific workflows |
| User | ~/.claude/commands/ | Personal commands for all projects |
| Namespaced | .claude/commands/{namespace}/ | Organized collections |
Create the File
# Navigate to project commands directory
cd /path/to/your/project/.claude/commands/
# Create the command file
touch analyze-code.md
# Open in your editor
nano analyze-code.md
# or
code analyze-code.md
Initial File Structure
Start with this template:
---
name: analyze-code
description: TODO - Add brief description
---
# Analyze Code
TODO - Add command description
## Usage
/analyze-code $ARGUMENTS
## Action Policy
<default_behavior>
TODO - Define behavior
</default_behavior>
<verification>
TODO - Define verification
</verification>
Step 4: Write YAML Frontmatter
Time: 3 minutes
CRITICAL: Every command MUST begin with YAML frontmatter.
Required Fields
---
name: analyze-code
description: Analyze code for quality, security, and performance issues
---
Optional Fields (Recommended)
---
name: analyze-code
description: Analyze code for quality, security, and performance issues
version: 1.0.0
author: Your Team
tags: ["analysis", "quality", "security", "code-review"]
model: sonnet
allowed-tools: ["Read", "Grep", "Task"]
---
Field Descriptions
| Field | Required | Description | Example |
|---|---|---|---|
| `name` | ✅ Yes | Command identifier (same as filename) | `analyze-code` |
| `description` | ✅ Yes | One-sentence description (80 chars max) | Analyze code for quality issues |
| `version` | ⏸️ No | Semantic version | `1.0.0` |
| `author` | ⏸️ No | Creator name or team | CODITECT Team |
| `tags` | ⏸️ No | Search/filter tags | `["analysis", "quality"]` |
| `model` | ⏸️ No | Preferred Claude model | `sonnet` (default), `opus`, `haiku` |
| `allowed-tools` | ⏸️ No | Restrict available tools | `["Read", "Grep", "Task"]` |
Our Example
---
name: analyze-code
description: Comprehensive code analysis with quality and security assessment
version: 1.0.0
tags: ["analysis", "quality", "security", "review"]
model: sonnet
allowed-tools: ["Read", "Grep", "Task"]
---
Why These Choices:
- `model: sonnet` - Balance of quality and speed for analysis tasks
- `allowed-tools` - Read files, search code, invoke agents
- `tags` - Searchable by domain (analysis, quality, security)
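Because missing or malformed frontmatter is the most common reason a command fails to load, it can help to sanity-check it in CI. The sketch below is a hypothetical helper using only the standard library (a real linter would use a proper YAML parser); it extracts simple `key: value` pairs and enforces the two required fields.

```python
# Hypothetical frontmatter checker, stdlib only; a real linter would use a YAML parser.
def parse_frontmatter(text: str) -> dict:
    """Extract simple key: value pairs from a leading '---' frontmatter block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        raise ValueError("command file must start with a '---' frontmatter block")
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter
        key, _, value = line.partition(":")
        if key.strip():
            meta[key.strip()] = value.strip()
    else:
        raise ValueError("frontmatter block is never closed with '---'")
    for field in ("name", "description"):
        if field not in meta:
            raise ValueError(f"missing required frontmatter field: {field}")
    return meta

sample = """---
name: analyze-code
description: Analyze code for quality, security, and performance issues
---
# Analyze Code
"""
print(parse_frontmatter(sample)["name"])  # → analyze-code
```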
Step 5: Write Command Description
Time: 5 minutes
Write a clear H1 heading and explanation of what the command does.
Structure
# Command Name
[2-3 sentence description of what the command does and when to use it]
## What This Command Does
[Detailed explanation with bullet points]
## When to Use This Command
[Specific use cases]
Our Example
# Analyze Code
Comprehensive code analysis using the codebase-analyzer agent to evaluate quality, security, and performance. Provides detailed reports with severity ratings, specific issues, and actionable recommendations.
## What This Command Does
This command performs deep code analysis by:
- Evaluating code quality (correctness, structure, documentation)
- Identifying security vulnerabilities and risks
- Analyzing performance characteristics and bottlenecks
- Checking error handling and type safety
- Generating prioritized improvement recommendations
## When to Use This Command
Use `/analyze-code` when you need to:
- **Pre-commit review** - Check code quality before committing
- **PR reviews** - Comprehensive analysis for pull requests
- **Refactoring** - Identify technical debt and improvement opportunities
- **Security audit** - Find potential vulnerabilities
- **Performance check** - Discover optimization opportunities
Step 6: Define Arguments
Time: 4 minutes
Specify how users provide input to your command.
Argument Methods
Choose the method that fits your needs:
Method 1: $ARGUMENTS (Simple Freeform)
For flexible, unstructured input:
## Usage
/analyze-code $ARGUMENTS
## Examples
/analyze-code src/components/Button.tsx
/analyze-code security vulnerabilities
/analyze-code performance bottlenecks in auth module
Method 2: Positional Arguments
For specific, structured inputs:
## Usage
/analyze-code $1 $2
## Arguments
- `$1` - File path or directory to analyze
- `$2` - Analysis focus (quality|security|performance)
## Examples
/analyze-code src/components/Button.tsx quality
/analyze-code src/auth security
Method 3: Handlebars Templates (Advanced)
For conditional logic and flags:
## Usage
/analyze-code --target=<path> --focus=<area> --detailed
## Arguments
{{#if target}}
Target: {{target}}
{{else}}
Target: Current directory
{{/if}}
{{#if focus}}
Analysis Focus: {{focus}}
{{else}}
Analysis Focus: Comprehensive (all aspects)
{{/if}}
{{#if detailed}}
Report Level: Detailed with code examples
{{else}}
Report Level: Summary only
{{/if}}
Our Example (Method 1: Simple)
## Usage
/analyze-code $ARGUMENTS
## Arguments
The command accepts freeform arguments specifying:
- **File paths** - Specific files or directories to analyze
- **Analysis focus** - Keywords like "security", "performance", "architecture"
- **Scope modifiers** - "critical only", "high priority", etc.
**Examples:**
```bash
# Analyze specific file
/analyze-code src/components/Button.tsx
# Analyze with focus
/analyze-code security issues in authentication
# Analyze directory
/analyze-code src/services/
# Combined scope
/analyze-code src/auth/ security and performance
```

**Default Behavior:** If no arguments are provided, the command analyzes the current working directory comprehensively.
---
## Step 7: Document Integration
**Time:** 5 minutes
Explain how your command integrates with agents, skills, or other commands.
### Integration Patterns
#### Pattern 1: Agent Delegation
```markdown
## Integration
### Agent Integration
This command delegates to the **codebase-analyzer** agent for analysis:
**Agent Responsibilities:**
- Code structure analysis
- Security vulnerability scanning
- Performance profiling
- Best practices validation
**Task Invocation:**
Task(subagent_type="codebase-analyzer", description="Analyze code quality", prompt="Perform comprehensive code analysis for [target] focusing on [criteria]")
**Agent Output:**
- Quality scores by category
- Detailed findings with code examples
- Prioritized recommendations
- Compliance reports
```
Pattern 2: Skill Integration
## Integration
### Skills Used
- **evaluation-framework** - LLM-as-judge scoring with rubrics
- **production-patterns** - Identify missing best practices
- **framework-patterns** - Architecture analysis
Pattern 3: Multi-Command Workflows
## Integration
### Related Commands
This command works well with:
- `/validate-config` - Validate before analysis
- `/generate-report` - Create formatted analysis reports
- `/create-checkpoint` - Save analysis results
**Example Workflow:**
```bash
/validate-config # Step 1: Verify configuration
/analyze-code src/ # Step 2: Analyze codebase
/generate-report analysis # Step 3: Create report
```

### Our Example
```markdown
## Integration
### Agent Integration
This command delegates analysis to the **codebase-analyzer** subagent:
**Why This Agent:**
- Specialized in code structure and architecture analysis
- Trained on code quality best practices
- Provides evidence-based assessments with code quotes
**Invocation Pattern:**
Use the codebase-analyzer subagent to perform comprehensive code analysis:
- Target: [user-specified files/directories]
- Focus: [quality, security, performance, or comprehensive]
- Output: Structured analysis report with scores and recommendations
**Agent Capabilities:**
- ✅ Code quality assessment (correctness, structure, documentation)
- ✅ Security vulnerability identification (OWASP Top 10, common CVEs)
- ✅ Performance analysis (algorithmic complexity, resource usage)
- ✅ Architecture evaluation (scalability, maintainability)
- ✅ Error handling review (exception patterns, edge cases)
### Skills Auto-Loaded
The codebase-analyzer agent automatically uses:
- `evaluation-framework` - Structured scoring with rubrics
- `production-patterns` - Best practices validation
- `framework-patterns` - Architecture pattern recognition
```
Step 8: Write Action Policy
Time: 5 minutes
CRITICAL: Every command MUST include Action Policy sections.
Required Sections
Every command needs BOTH sections:
- `<default_behavior>` - What happens WITHOUT user confirmation
- `<verification>` - What to check AFTER execution
Behavior Categories
| Category | Description | User Approval |
|---|---|---|
| Read-only | No modifications | Not needed |
| Interactive | Prompts for decisions | Per-action |
| Automated | Executes with approval | Upfront |
| Destructive | Modifies/deletes | Explicit confirmation |
Template
## Action Policy
<default_behavior>
This command [behavior category]. Provides:
- [What it does/outputs]
- [What it analyzes/creates]
- [What it recommends]
User decides [what actions to take based on output].
</default_behavior>
<verification>
After execution, verify:
- [Verification criterion 1]
- [Verification criterion 2]
- [Verification criterion 3]
- [Verification criterion 4]
</verification>
Our Example
## Action Policy
<default_behavior>
This command analyzes and recommends without making changes. Provides:
- Comprehensive code analysis with structural insights
- Issue identification with severity ratings (CRITICAL/HIGH/MEDIUM/LOW)
- Specific recommendations with justification and code examples
- Security implications and vulnerability assessment
- Performance characteristics and optimization opportunities
- Architectural quality metrics and pattern analysis
User decides which recommendations to implement. Command performs read-only analysis.
</default_behavior>
<verification>
After analysis completion, verify:
- All requested code sections analyzed comprehensively
- Issues categorized by type (quality/security/performance) and severity
- Concrete improvements suggested with specific code examples (not abstract)
- Security implications evaluated against OWASP Top 10
- Performance characteristics assessed (time/space complexity)
- Architectural patterns identified and evaluated
- Code quality metrics provided (maintainability, readability, testability)
- Next steps clearly prioritized by impact and effort
</verification>
Step 9: Add Examples
Time: 3 minutes
Provide 2-3 realistic examples showing how to use your command.
Example Structure
## Examples
### Example 1: [Use Case Title]
**Scenario:** [Brief description of when/why to use this]
**Command:**
```bash
/command-name arguments
```

**Expected Output:**
- [What happens]
- [What the user sees]
Next Steps:
- [What to do with the output]
### Our Example
```markdown
## Examples
### Example 1: Quick File Analysis
**Scenario:** You've just modified a React component and want to check for issues before committing.
**Command:**
```bash
/analyze-code src/components/UserProfile.tsx
```

**Expected Output:**
# Analysis Report: UserProfile.tsx
**Overall Score**: 4.2/5.0 (Grade: B)
## Summary
Strong component structure with good TypeScript usage. Minor issues with error handling
and missing accessibility attributes.
## Detailed Scores
### Code Quality: 4.5/5 (Excellent)
- Clear component structure
- Proper TypeScript interfaces
- Good prop validation
### Security: 3.8/5 (Good)
⚠️ Issue: XSS vulnerability in user bio rendering
- Line 47: `dangerouslySetInnerHTML` used without sanitization
- Recommendation: Use DOMPurify or render as text
### Performance: 4.3/5 (Very Good)
- Efficient re-rendering with React.memo
- Proper dependency arrays in useEffect
## Priority Improvements
1. 🔴 HIGH: Sanitize user bio before rendering (security)
2. 🟡 MEDIUM: Add ARIA labels to profile actions (accessibility)
3. 🟢 LOW: Extract inline styles to CSS modules (maintainability)
Next Steps:
- Fix XSS vulnerability by adding DOMPurify
- Add ARIA attributes for accessibility
- Run tests: `/execute-tests src/components/UserProfile.test.tsx`
Example 2: Security-Focused Analysis
Scenario: Preparing for security audit and need to identify vulnerabilities in authentication module.
Command:
/analyze-code src/auth/ security vulnerabilities
Expected Output:
# Security Analysis Report: src/auth/
**Security Score**: 3.2/5.0 (Needs Improvement)
## Critical Findings
### 🔴 CRITICAL: SQL Injection Vulnerability
**File:** `src/auth/database.ts:142`
**Issue:** Raw SQL query with string interpolation
```typescript
// VULNERABLE CODE
const query = `SELECT * FROM users WHERE email = '${userEmail}'`;
```

**Recommendation:** Use parameterized queries
```typescript
// SECURE CODE
const query = 'SELECT * FROM users WHERE email = ?';
db.query(query, [userEmail]);
```
🔴 CRITICAL: Weak Password Hashing
File: src/auth/password.ts:28
Issue: Using MD5 for password hashing
Recommendation: Migrate to bcrypt with cost factor 12+
🟠 HIGH: Missing Rate Limiting
File: src/auth/login.ts:67
Issue: Login endpoint has no rate limiting
Recommendation: Implement rate limiting (5 attempts per 15 min)
Compliance Check
- ✅ Password minimum length enforced (8+ chars)
- ✅ HTTPS enforced for authentication endpoints
- ❌ No multi-factor authentication
- ❌ Session tokens not rotated on privilege escalation
- ❌ No account lockout after failed attempts
Immediate Actions Required
- 🔴 Fix SQL injection (URGENT - exploitable)
- 🔴 Upgrade password hashing algorithm
- 🟠 Implement rate limiting on auth endpoints
- 🟠 Add MFA for admin accounts
**Next Steps:**
1. Create security fixes branch: `git checkout -b fix/auth-vulnerabilities`
2. Fix critical issues (SQL injection, password hashing)
3. Run security tests: `/security-test src/auth/`
4. Document changes in security advisory
---
### Example 3: Performance Analysis
**Scenario:** Application feels slow, need to identify performance bottlenecks.
**Command:**
```bash
/analyze-code src/services/DataService.ts performance
```

**Expected Output:**
# Performance Analysis: DataService.ts
**Performance Score**: 2.8/5.0 (Needs Optimization)
## Performance Bottlenecks
### 🔴 O(n²) Algorithm Detected
**Location:** `fetchAndProcessData()` - Line 89
**Issue:** Nested loops over large datasets
```typescript
// CURRENT (O(n²))
data.forEach(item => {
results.forEach(result => {
if (item.id === result.id) { /* ... */ }
});
});
```

**Impact:** 15 seconds for 10,000 items
**Optimization:** Use a Map for O(n) lookup
```typescript
// OPTIMIZED (O(n))
const resultMap = new Map(results.map(r => [r.id, r]));
data.forEach(item => {
  const result = resultMap.get(item.id);
  if (result) { /* ... */ }
});
```
Expected Improvement: ~150ms (100x faster)
🟠 Memory Leak
Location: subscribeToUpdates() - Line 234
Issue: Event listeners not cleaned up
Fix: Add cleanup in useEffect return
🟡 Unnecessary Re-renders
Location: DataTable component - Line 156
Issue: Missing React.memo, re-renders on every parent update
Fix: Wrap with React.memo and useMemo for data
Optimization Recommendations
| Priority | Optimization | Impact | Effort |
|---|---|---|---|
| 🔴 High | Replace nested loops with Map | 100x faster | 30 min |
| 🟠 High | Fix memory leak | Prevents crashes | 15 min |
| 🟡 Medium | Add React.memo | 60% fewer renders | 10 min |
| 🟢 Low | Lazy load data service | Faster initial load | 45 min |
Next Steps
- Profile with Chrome DevTools to confirm bottleneck
- Implement Map-based lookup
- Add performance tests
- Monitor production metrics after deployment
Step 10: Test Your Command
Time: 5 minutes
Test your command to ensure it works correctly.
Testing Checklist
- Syntax Check - File saved without errors
- Discovery - Command appears in `/` autocomplete
- Basic Invocation - Command runs without arguments
- With Arguments - Command processes arguments correctly
- Error Handling - Command handles invalid input gracefully
- Output Quality - Results are formatted and useful
- Integration - Agents/skills invoked correctly
Testing Process
Step 1: Verify Discovery
# In Claude Code, type:
/analyze
# Your command should appear in autocomplete
# Shows: /analyze-code (project) - Comprehensive code analysis...
Step 2: Test Basic Invocation
/analyze-code
Expected: Command executes with default behavior (analyze current directory)
Step 3: Test With Arguments
/analyze-code src/components/Button.tsx
Expected: Analyzes specific file
Step 4: Test Error Handling
/analyze-code /nonexistent/path
Expected: Graceful error message, not crash
Step 5: Validate Output
Check that output includes:
- Overall assessment
- Specific findings
- Code examples
- Recommendations
- Next steps
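A quick programmatic spot-check can complement manual review of the output. The sketch below is hypothetical; its keyword list paraphrases the checklist above, and it simply flags reports in which an expected section never appears.

```python
# Hypothetical spot-check; the keywords paraphrase the output checklist above.
REQUIRED_KEYWORDS = ["overall", "finding", "recommendation", "next step"]

def missing_report_sections(report: str) -> list[str]:
    """Return checklist keywords that never appear in the report text."""
    lowered = report.lower()
    return [kw for kw in REQUIRED_KEYWORDS if kw not in lowered]

good = "Overall Score: B. Findings: 2 issues. Recommendations: fix X. Next Steps: commit."
print(missing_report_sections(good))  # → []
print(missing_report_sections("Looks fine to me"))  # all four keywords flagged
```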
Step 11: Validate Against Standard
Time: 3 minutes
Ensure your command meets CODITECT-STANDARD-COMMANDS.md requirements.
Validation Checklist
Structure (25 points)
- (5 pts) YAML frontmatter with name and description
- (5 pts) Kebab-case verb-noun filename
- (5 pts) Clear Markdown structure with headings
- (5 pts) Required sections present
- (5 pts) Logical organization
Documentation (25 points)
- (5 pts) Clear command description
- (5 pts) Usage section with syntax
- (5 pts) Arguments documented
- (5 pts) Minimum 2 examples
- (5 pts) Workflow/process explained
Integration (20 points)
- (7 pts) Agent integration specified
- (7 pts) Skill integration documented
- (6 pts) Integration patterns clear
Usability (15 points)
- (5 pts) Name is clear and descriptive
- (5 pts) Arguments easy to understand
- (5 pts) Error cases addressed
Action Policy (15 points)
- (8 pts) `<default_behavior>` present and clear
- (7 pts) `<verification>` present with criteria
Total: 100 points
Grade:
- 90-100: A (Exemplary)
- 80-89: B (Production-ready)
- 70-79: C (Functional, improvements needed)
- Below 70: Needs work
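The rubric above can be tallied with a few lines of Python. This is a hypothetical helper whose category maxima and grade bands mirror this checklist.

```python
# Hypothetical tally helper; category maxima and grade bands mirror this checklist.
RUBRIC_MAX = {
    "structure": 25,
    "documentation": 25,
    "integration": 20,
    "usability": 15,
    "action_policy": 15,
}

def grade_command(scores: dict[str, int]) -> tuple[int, str]:
    """Sum per-category scores (capped at each category maximum) and map to a grade."""
    total = sum(min(scores.get(cat, 0), cap) for cat, cap in RUBRIC_MAX.items())
    if total >= 90:
        grade = "A (Exemplary)"
    elif total >= 80:
        grade = "B (Production-ready)"
    elif total >= 70:
        grade = "C (Functional, improvements needed)"
    else:
        grade = "Needs work"
    return total, grade

print(grade_command({"structure": 22, "documentation": 23, "integration": 16,
                     "usability": 13, "action_policy": 13}))  # → (87, 'B (Production-ready)')
```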
Step 12: Commit to Repository
Time: 2 minutes
Add your command to version control with a proper commit message.
Git Workflow
# Stage the command file
git add .claude/commands/analyze-code.md
# Commit with conventional format
git commit -m "feat: Add /analyze-code command for comprehensive code analysis
- YAML frontmatter with name, description, tags
- Delegates to codebase-analyzer agent
- Supports security, performance, architecture focus
- Action Policy sections for behavior clarity
- Three realistic examples with expected output
- Grade: B (production-ready)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
# Push to remote
git push origin main
Commit Message Format
<type>(<scope>): <subject>
<body>
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Types:
- `feat` - New command
- `fix` - Bug fix in existing command
- `docs` - Documentation updates
- `refactor` - Command restructuring
Step 13: Document in README (Optional)
Time: 2 minutes
If your project has a commands README, add your command to the index.
Commands README Entry
## Analysis Commands
### /analyze-code
**Purpose:** Comprehensive code analysis with quality and security assessment
**Usage:**
```bash
/analyze-code [target-files] [focus-area]
```

**Examples:**

```bash
/analyze-code src/components/Button.tsx
/analyze-code src/auth/ security
```

**Output:** Detailed analysis report with scores and recommendations
**Grade:** B (Production-ready)
---
## Complete Working Example
Here's the full `analyze-code.md` command file:
```markdown
---
name: analyze-code
description: Comprehensive code analysis with quality and security assessment
version: 1.0.0
tags: ["analysis", "quality", "security", "review"]
model: sonnet
allowed-tools: ["Read", "Grep", "Task"]
---
# Analyze Code
Comprehensive code analysis using the codebase-analyzer agent to evaluate quality, security, and performance. Provides detailed reports with severity ratings, specific issues, and actionable recommendations.
## What This Command Does
This command performs deep code analysis by:
- Evaluating code quality (correctness, structure, documentation)
- Identifying security vulnerabilities and risks
- Analyzing performance characteristics and bottlenecks
- Checking error handling and type safety
- Generating prioritized improvement recommendations
## When to Use This Command
Use `/analyze-code` when you need to:
- **Pre-commit review** - Check code quality before committing
- **PR reviews** - Comprehensive analysis for pull requests
- **Refactoring** - Identify technical debt and improvement opportunities
- **Security audit** - Find potential vulnerabilities
- **Performance check** - Discover optimization opportunities
## Usage
/analyze-code $ARGUMENTS
## Arguments
The command accepts freeform arguments specifying:
- **File paths** - Specific files or directories to analyze
- **Analysis focus** - Keywords like "security", "performance", "architecture"
- **Scope modifiers** - "critical only", "high priority", etc.
**Examples:**
```bash
# Analyze specific file
/analyze-code src/components/Button.tsx
# Analyze with focus
/analyze-code security issues in authentication
# Analyze directory
/analyze-code src/services/
# Combined scope
/analyze-code src/auth/ security and performance
```

**Default Behavior:** If no arguments are provided, the command analyzes the current working directory comprehensively.
Integration
Agent Integration
This command delegates analysis to the codebase-analyzer subagent:
Why This Agent:
- Specialized in code structure and architecture analysis
- Trained on code quality best practices
- Provides evidence-based assessments with code quotes
Invocation Pattern:
Use the codebase-analyzer subagent to perform comprehensive code analysis:
- Target: [user-specified files/directories]
- Focus: [quality, security, performance, or comprehensive]
- Output: Structured analysis report with scores and recommendations
Agent Capabilities:
- ✅ Code quality assessment (correctness, structure, documentation)
- ✅ Security vulnerability identification (OWASP Top 10, common CVEs)
- ✅ Performance analysis (algorithmic complexity, resource usage)
- ✅ Architecture evaluation (scalability, maintainability)
- ✅ Error handling review (exception patterns, edge cases)
Skills Auto-Loaded
The codebase-analyzer agent automatically uses:
- `evaluation-framework` - Structured scoring with rubrics
- `production-patterns` - Best practices validation
- `framework-patterns` - Architecture pattern recognition
Examples
[... include the three examples from Step 9 ...]
Action Policy
<default_behavior> This command analyzes and recommends without making changes. Provides:
- Comprehensive code analysis with structural insights
- Issue identification with severity ratings (CRITICAL/HIGH/MEDIUM/LOW)
- Specific recommendations with justification and code examples
- Security implications and vulnerability assessment
- Performance characteristics and optimization opportunities
- Architectural quality metrics and pattern analysis
User decides which recommendations to implement. Command performs read-only analysis.
</default_behavior>

<verification>
After analysis completion, verify:
- All requested code sections analyzed comprehensively
- Issues categorized by type (quality/security/performance) and severity
- Concrete improvements suggested with specific code examples (not abstract)
- Security implications evaluated against OWASP Top 10
- Performance characteristics assessed (time/space complexity)
- Architectural patterns identified and evaluated
- Code quality metrics provided (maintainability, readability, testability)
- Next steps clearly prioritized by impact and effort
</verification>
```
Quick Reference Checklist
Use this checklist when creating new commands:
Planning Phase
- Define command purpose and use cases
- Identify target users
- Specify inputs and outputs
- Choose verb-noun name (kebab-case)
File Creation
- Create `.claude/commands/{name}.md` file
- Add YAML frontmatter (name, description required)
- Write H1 heading with command name
- Add command description (2-3 sentences)
Core Sections
- Document usage syntax
- Define arguments (if applicable)
- Add 2-3 realistic examples
- Write Action Policy sections
  - `<default_behavior>` block
  - `<verification>` block

Integration
- Document agent integration (if applicable)
- Document skill integration (if applicable)
- List related commands
Validation
- Test command discovery (`/` autocomplete)
- Test basic invocation (no args)
- Test with arguments
- Test error handling
- Validate output quality
- Check against standard (aim for Grade B: 80%+)
Finalization
- Commit to git with conventional format
- Push to remote repository
- Update commands README (if applicable)
- Notify team (optional)
Troubleshooting
Issue: Command Not Appearing in Autocomplete
Symptoms: Type / but your command doesn't show
Causes:
- Missing YAML frontmatter
- Invalid YAML syntax
- File not saved with `.md` extension
- File in wrong directory
Solutions:
# Check file exists
ls -la .claude/commands/your-command.md
# Validate YAML syntax
head -10 .claude/commands/your-command.md
# Ensure proper extension
mv your-command.txt your-command.md
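The discovery causes listed above can also be checked in one place. This hypothetical Python helper reports the extension and frontmatter problems for a given filename and file content (the directory check is left to the shell commands above).

```python
# Hypothetical discovery check covering two of the causes above (extension, frontmatter).
def discovery_problems(filename: str, text: str) -> list[str]:
    """Report likely reasons a command file is not picked up by autocomplete."""
    problems = []
    if not filename.endswith(".md"):
        problems.append("file lacks the .md extension")
    if not text.startswith("---\n"):
        problems.append("file does not begin with a '---' YAML frontmatter block")
    return problems

print(discovery_problems("analyze-code.md", "---\nname: analyze-code\n---\n"))  # → []
print(discovery_problems("analyze-code.txt", "# no frontmatter"))  # both problems flagged
```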
Issue: $ARGUMENTS Not Substituting
Symptoms: Command shows literal $ARGUMENTS text
Causes:
- Claude Code version doesn't support placeholder
- Syntax error in command file
Solutions:
- Upgrade Claude Code to latest version
- Use alternative: `[user-provided-arguments]` as placeholder text
- Switch to Handlebars templates for more control
Issue: Agent Not Being Invoked
Symptoms: Command output doesn't include agent results
Causes:
- Agent not activated in component-activation-status.json
- Incorrect invocation syntax
- Agent name misspelled
Solutions:
# Check agent activation
python3 scripts/update-component-activation.py status agent codebase-analyzer
# Activate if needed
python3 scripts/update-component-activation.py activate agent codebase-analyzer \
--reason "Required by /analyze-code command"
# Verify agent name
ls -la agents/ | grep codebase
Issue: Command Too Slow
Symptoms: Command takes >2 minutes or times out
Causes:
- Complex workflow without streaming
- Large file processing
- Synchronous agent invocations
Solutions:
- Break into smaller commands
- Add progress indicators
- Use parallel agent invocations
- Implement streaming for large outputs
- Add timeout handling
Issue: Poor Output Quality
Symptoms: Command works but results aren't useful
Causes:
- Vague agent instructions
- Missing context in prompts
- No validation of outputs
Solutions:
- Be specific in agent prompts (include examples)
- Provide full context (file paths, code snippets)
- Add verification criteria in Action Policy
- Include quality gates in command logic
Additional Resources
Official Documentation
Community Examples
- Claude Command Suite (see config/component-counts.json)
- WShobson Commands (see config/component-counts.json)
Related Guides
Summary
You now know how to:
- ✅ Plan and design effective slash commands
- ✅ Structure commands with proper YAML frontmatter
- ✅ Define arguments ($ARGUMENTS, positional, Handlebars)
- ✅ Integrate with agents and skills
- ✅ Write clear Action Policy sections
- ✅ Test and validate commands
- ✅ Commit with proper git workflow
Next Steps:
- Create your first command following this guide
- Share with team for feedback
- Iterate based on usage patterns
- Document common patterns for reuse
Remember: Commands should be clear, focused, and production-ready. Aim for Grade B (80%+) on your first version, then iterate to Grade A.
**Document Version:** 1.0.0
**Last Updated:** December 3, 2025
**Estimated Read Time:** 25-30 minutes
**File Size:** ~18 KB
Maintained By: CODITECT Core Standards Team