
Suggest Agent - Smart Invocation Generator

Transform any user request into the correct "Use the [agent-name] subagent" format.

System Prompt

⚠️ EXECUTION DIRECTIVE: When the user invokes this command, you MUST:

  1. IMMEDIATELY execute - no questions, no explanations first
  2. ALWAYS show full output from script/tool execution
  3. ALWAYS provide summary after execution completes

DO NOT:

  • Say "I don't need to take action" - you ALWAYS execute when invoked
  • Ask for confirmation unless requires_confirmation: true in frontmatter
  • Skip execution even if it seems redundant - run it anyway

The user invoking the command IS the confirmation.


Usage

/suggest-agent [describe what you want to do]

System Behavior

When this command is used, Claude will:

  1. Analyze your request to understand the task domain and complexity
  2. Select the best agent(s) from our multi-agent framework
  3. Generate the exact invocation syntax you should use
  4. Provide a brief explanation of why this agent is optimal
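
The four steps above can be sketched as a small lookup-and-format routine. The `AGENTS` table and `suggest()` helper below are hypothetical illustrations only; the real command is prompt-driven, not implemented in Python.

```python
# Hypothetical sketch of the select-and-format steps; agent names and
# "Why" strings are taken from the reference and examples in this document.
AGENTS = {
    "market research": ("competitive-market-analyst",
                        "Market research specialist with web search capabilities"),
    "file discovery": ("codebase-locator",
                       "Specialized in file discovery and code location"),
}

def suggest(request: str, domain: str) -> dict:
    """Select an agent for the domain and format the invocation."""
    agent, why = AGENTS.get(
        domain, ("orchestrator", "Escalated: unclear or complex request"))
    return {
        "invocation": f"Use the {agent} subagent to {request}",
        "why": why,
    }
```

For example, `suggest("research Cursor's pricing strategy", "market research")` yields the invocation "Use the competitive-market-analyst subagent to research Cursor's pricing strategy".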

Quick Agent Reference

🔍 Research & Analysis

  • competitive-market-analyst - Market research, pricing, competitor analysis
  • web-search-researcher - External information gathering, documentation research
  • thoughts-analyzer - Analyze existing research documents
  • codebase-analyzer - Understand HOW code works
  • codebase-locator - Find WHERE code/files are located

🛠️ Development

  • rust-expert-developer - Rust backend development
  • frontend-react-typescript-expert - React/TypeScript UI development
  • orchestrator - Complex multi-step workflows, full-stack features

💾 Database & Infrastructure

  • foundationdb-expert - FoundationDB schema design
  • cloud-architect - Deployment, CI/CD, infrastructure

🔍 Quality & Testing

  • testing-specialist - Test coverage, TDD, quality gates
  • security-specialist - Security audits, vulnerability assessment
  • qa-reviewer - Documentation quality review

Example Outputs

Input: "I want to research Cursor's pricing strategy"

Output:

Use the competitive-market-analyst subagent to research Cursor's pricing strategy, analyze their tier structure, and identify market positioning approaches

Why: Market research specialist with web search capabilities

Input: "Find all authentication-related files"

Output:

Use the codebase-locator subagent to find all authentication-related files including handlers, middleware, and configuration

Why: Specialized in file discovery and code location

Input: "Implement user profile editing feature"

Output:

Use the orchestrator subagent to implement user profile editing with backend API endpoints, frontend React components, and comprehensive testing

Why: Complex full-stack feature requiring coordination of multiple specialists

Input: "Understand how session management works"

Output:

Use the codebase-analyzer subagent to understand session management implementation, data flow, and authentication patterns

Why: Specialized in analyzing existing code implementation

Input: "Research AI IDE market trends and review our current feature implementation"

Output:

Use the competitive-market-analyst subagent to research AI IDE market trends while having the codebase-analyzer subagent review our current feature implementation

Why: Multi-agent coordination for external research + internal analysis

Command Integration

This works with your existing workflow:

# Step 1: Get the right invocation
/suggest-agent "optimize database performance"

# Step 2: Use the suggested output directly
"Use the foundationdb-expert subagent to analyze and optimize database performance including query patterns and schema efficiency"

# Step 3: Agent executes with proper specialization

Pattern Recognition

The system recognizes these request patterns:

  • "Research [topic]" → competitive-market-analyst
  • "Find [files/code]" → codebase-locator
  • "Understand [implementation]" → codebase-analyzer
  • "Implement [feature]" → Domain specialist + orchestrator
  • "Fix [bug]" → Relevant domain specialist
  • "Review [code/docs]" → qa-reviewer or domain specialist
  • "Design [architecture]" → senior-architect
  • "Analyze [existing work]" → thoughts-analyzer

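The pattern table above could be expressed as a first-match lookup. The regexes and the fallback to the orchestrator in this sketch are assumptions, not the command's actual matching logic.

```python
import re

# Hypothetical pattern table: first matching keyword wins, and unmatched
# requests escalate to the orchestrator (per the Error Prevention section).
PATTERNS = [
    (r"^research\b", "competitive-market-analyst"),
    (r"^find\b", "codebase-locator"),
    (r"^understand\b", "codebase-analyzer"),
    (r"^implement\b", "orchestrator"),
    (r"^review\b", "qa-reviewer"),
    (r"^design\b", "senior-architect"),
    (r"^analyze\b", "thoughts-analyzer"),
]

def match_agent(request: str) -> str:
    """Return the first agent whose pattern matches; escalate otherwise."""
    text = request.strip().lower()
    for pattern, agent in PATTERNS:
        if re.search(pattern, text):
            return agent
    return "orchestrator"
```
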
Multi-Agent Patterns

For complex requests, generates coordinated invocations:

# Parallel execution
"Use the [agent-1] subagent to [task-1] while having the [agent-2] subagent [task-2]"

# Sequential with handoff
"Use the [agent-1] subagent to [task-1], then use the [agent-2] subagent to [task-2]"

# Orchestrated workflow
"Use the orchestrator subagent to [coordinate complex multi-step task]"

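The three templates above are plain string substitutions; as illustrative string builders (the helper names here are hypothetical):

```python
# Illustrative builders for the three multi-agent invocation templates.
def parallel(agent1: str, task1: str, agent2: str, task2: str) -> str:
    return (f"Use the {agent1} subagent to {task1} "
            f"while having the {agent2} subagent {task2}")

def sequential(agent1: str, task1: str, agent2: str, task2: str) -> str:
    return (f"Use the {agent1} subagent to {task1}, "
            f"then use the {agent2} subagent to {task2}")

def orchestrated(task: str) -> str:
    return f"Use the orchestrator subagent to {task}"
```
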
Tips for Best Results

  1. Be specific about what you want to accomplish
  2. Mention the domain if it's not obvious (backend, frontend, research, etc.)
  3. Indicate complexity if it's a multi-step process
  4. Specify output format if you have preferences

Error Prevention

The system will:

  • ✅ Always provide valid agent names from the multi-agent framework
  • ✅ Generate syntactically correct invocation format
  • ✅ Match task complexity to appropriate agent(s)
  • ✅ Suggest alternatives if multiple approaches are viable
  • ✅ Escalate to orchestrator for unclear or complex requests

Action Policy

<default_behavior> This command analyzes and recommends without making changes. It provides:

  • Detailed analysis of current state
  • Specific recommendations with justification
  • Prioritized action items
  • Risk assessment

User decides which recommendations to implement. </default_behavior>

After analysis, provide:

  • Analysis completeness (all aspects covered)
  • Recommendation confidence levels
  • Specific examples from codebase
  • Clear next steps for user

Success Output

When agent suggestion completes:

✅ COMMAND COMPLETE: /suggest-agent
Request: <user-description>
Agent: <recommended-agent>
Invocation: "Use the [agent] subagent to [task]"
Confidence: High|Medium
Alternative: <backup-agent> (if applicable)

Completion Checklist

Before marking complete:

  • Request analyzed
  • Agent selected
  • Invocation formatted
  • Explanation provided
  • Alternatives suggested (if applicable)

Failure Indicators

This command has FAILED if:

  • ❌ No agent suggested
  • ❌ Invalid agent name
  • ❌ Invocation not formatted
  • ❌ Request not understood

When NOT to Use

Do NOT use when:

  • You already know which agent to use
  • The task is simple enough to just do it directly
  • You only need the agent list (use /which)

Anti-Patterns (Avoid)

Anti-Pattern          Problem                 Solution
Vague request         Wrong agent             Be specific
Skip explanation      Don't understand why    Read the reasoning
Ignore alternatives   Miss better option      Consider all suggestions

Principles

This command embodies:

  • #6 Clear, Understandable - Formatted invocation
  • #1 Recycle → Extend - Uses existing agents

Full Standard: CODITECT-STANDARD-AUTOMATION.md