/which - Dynamic Agent Discovery

Dynamically match your task to the best CODITECT agent(s) using semantic search against all 776 agents.

System Prompt

EXECUTION DIRECTIVE: When /which is invoked, you MUST:

  1. IMMEDIATELY execute the database query - no questions first
  2. Search dynamically - NEVER use static/hardcoded agent lists
  3. Return ranked results with match scores
  4. Provide WHEN guidance - explain when to use primary vs alternatives

Execution Steps

# Step 1: Extract keywords from task
TASK="<user's task description>"
KEYWORDS=$(echo "$TASK" | tr ' ' ',')

# Step 2: Load project context (ADR-156)
cd /Users/halcasteel/PROJECTS/coditect-rollout-master/submodules/core/coditect-core
source .venv/bin/activate

# Get current project ID
python3 -c "from scripts.core.paths import discover_project; print(discover_project() or 'global')"

# Query project-specific decisions and patterns
python3 scripts/context-query.py --project-stats
python3 scripts/context-query.py --decisions --limit 5 --filter-project

# Step 3: Query component database
# Primary search: semantic match on task
python3 scripts/component-indexer.py --type agent --search "$TASK"

# Secondary search: capability match
python3 scripts/component-indexer.py --type agent --capability "$KEYWORDS"

# Check for orchestrators if complex task
python3 scripts/component-indexer.py --orchestrators
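The `tr`-based keyword extraction in Step 1 is deliberately crude. A slightly more robust sketch in Python (the stopword list and function name are illustrative, not part of the command's implementation):

```python
import re

# Illustrative stopword list -- not part of the /which implementation.
STOPWORDS = {"a", "an", "the", "to", "for", "of", "and", "or", "with"}

def extract_keywords(task: str) -> str:
    """Lowercase, strip punctuation, drop stopwords, join with commas."""
    words = re.findall(r"[a-z0-9-]+", task.lower())
    return ",".join(w for w in words if w not in STOPWORDS)

print(extract_keywords("deploy to kubernetes"))  # deploy,kubernetes
```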

Database Query (Python)

import sqlite3
import os

def find_agents_for_task(task: str, db_path: str = "context-storage/platform.db"):
    """Full-text search the component database for agents matching a task."""
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()

    # Full-text search on agent capabilities
    cursor.execute("""
        SELECT c.name, c.description,
               GROUP_CONCAT(cap.capability, ', ') AS capabilities
        FROM components c
        LEFT JOIN capabilities cap ON c.id = cap.component_id
        WHERE c.type = 'agent'
          AND c.id IN (
              SELECT rowid FROM component_search
              WHERE component_search MATCH ?
          )
        GROUP BY c.id
        ORDER BY rank
        LIMIT 10
    """, (task,))

    results = cursor.fetchall()
    conn.close()
    return results

def get_project_context(project_id: str | None = None):
    """Load project-specific context for agent recommendations (ADR-156)."""
    from scripts.core.paths import get_org_db_path, discover_project

    project_id = project_id or discover_project() or os.environ.get('CODITECT_PROJECT')
    if not project_id:
        return {'project_id': None, 'decisions': [], 'patterns': []}

    org_db = get_org_db_path()
    conn = sqlite3.connect(str(org_db))
    cursor = conn.cursor()

    # Get recent project decisions
    cursor.execute("""
        SELECT title, rationale, created_at
        FROM decisions
        WHERE project_id = ? OR scope = 'global'
        ORDER BY created_at DESC
        LIMIT 5
    """, (project_id,))
    decisions = cursor.fetchall()

    # Get project error patterns
    cursor.execute("""
        SELECT error_type, solution, success_count
        FROM error_solutions
        WHERE project_id = ? OR scope = 'global'
        ORDER BY success_count DESC
        LIMIT 5
    """, (project_id,))
    patterns = cursor.fetchall()

    conn.close()
    return {
        'project_id': project_id,
        'decisions': decisions,
        'patterns': patterns,
    }

Usage

/which <task-description>

# Examples
/which deploy to kubernetes
/which review code for security vulnerabilities
/which create API documentation
/which sync git submodules
/which build a business plan
/which optimize database queries

Response Format

REQUIRED OUTPUT STRUCTURE:

┌─────────────────────────────────────────────────────────────┐
│ /which: <task> │
│ Project: <project-id or "global"> (ADR-156) │
├─────────────────────────────────────────────────────────────┤
│ PROJECT CONTEXT (if available) │
│ ─────────────── │
│ Recent Decisions: <count> decisions loaded │
│ • <decision-1 title> - <rationale summary> │
│ Error Patterns: <count> patterns loaded │
│ • <pattern-1> - resolved with <solution> │
│ │
├─────────────────────────────────────────────────────────────┤
│ PRIMARY RECOMMENDATION │
│ ────────────────────── │
│ Agent: <agent-name> │
│ Match: <score>% │
│ Health: [XX%] ↑/↓/→ │
│ Why: <1-line reason this agent is best for this task> │
│ │
│ Capabilities: │
│ • <capability 1> │
│ • <capability 2> │
│ • <capability 3> │
│ │
│ Invocation: │
│ /agent <agent-name> "<task with context>" │
│ │
├─────────────────────────────────────────────────────────────┤
│ SUPPORTING AGENTS │
│ ────────────────── │
│ 2. <agent-2> (<score>%) [Health: XX% ↑] - <why useful> │
│ 3. <agent-3> (<score>%) [Health: XX% →] - <why useful> │
│ │
├─────────────────────────────────────────────────────────────┤
│ WHEN TO USE WHICH │
│ ───────────────── │
│ • Use <primary> when: <specific scenario> │
│ • Use <agent-2> instead when: <alternative scenario> │
│ • Use <agent-3> instead when: <alternative scenario> │
│ │
│ For complex multi-step tasks, consider: │
│ /agent orchestrator "coordinate: <task>" │
└─────────────────────────────────────────────────────────────┘

Project Context Display (ADR-156):

  • Show project ID at top of output
  • Display relevant project decisions that may inform agent selection
  • Show error patterns from project history
  • If no project detected, show "global" scope

Health Score Display:

  • Green (≥75%): [85%] ↑ - Reliable, well-performing
  • Yellow (50-74%): [62%] → - Functional, room for improvement
  • Red (<50%): [42%] ↓ - Consider alternatives or improvement queued

Health Warning: If primary recommendation has health <50%, display:

⚠️ WARNING: <agent> health is low (XX%)
Consider healthier alternative: <agent-2> (YY% health)

Examples

Example 1: /which deploy to kubernetes

┌─────────────────────────────────────────────────────────────┐
│ /which: deploy to kubernetes │
├─────────────────────────────────────────────────────────────┤
│ PRIMARY RECOMMENDATION │
│ ────────────────────── │
│ Agent: k8s-statefulset-specialist │
│ Match: 94% │
│ Why: Specialized in Kubernetes deployments and GKE │
│ │
│ Capabilities: │
│ • Kubernetes StatefulSet configuration │
│ • GKE cluster management │
│ • Persistent volume orchestration │
│ • Pod lifecycle management │
│ │
│ Invocation: │
│ /agent k8s-statefulset-specialist "deploy application │
│ to kubernetes with proper resource limits and PVCs" │
│ │
├─────────────────────────────────────────────────────────────┤
│ SUPPORTING AGENTS │
│ ────────────────── │
│ 2. devops-engineer (82%) - CI/CD pipeline integration │
│ 3. cloud-architect (76%) - Infrastructure provisioning │
│ │
├─────────────────────────────────────────────────────────────┤
│ WHEN TO USE WHICH │
│ ───────────────── │
│ • Use k8s-statefulset-specialist when: K8s-specific work, │
│ StatefulSets, persistent workloads, GKE configuration │
│ • Use devops-engineer instead when: Need full CI/CD │
│ pipeline with K8s as deployment target │
│ • Use cloud-architect instead when: Need to provision │
│ the K8s cluster itself (infrastructure) │
└─────────────────────────────────────────────────────────────┘

Example 2: /which create API documentation

┌─────────────────────────────────────────────────────────────┐
│ /which: create API documentation │
├─────────────────────────────────────────────────────────────┤
│ PRIMARY RECOMMENDATION │
│ ────────────────────── │
│ Agent: codi-documentation-writer │
│ Match: 96% │
│ Why: Specialized in technical documentation and API docs │
│ │
│ Capabilities: │
│ • API reference documentation │
│ • OpenAPI/Swagger generation │
│ • Code example creation │
│ • Documentation structure and organization │
│ │
│ Invocation: │
│ /agent codi-documentation-writer "create comprehensive │
│ API documentation with examples and error codes" │
│ │
├─────────────────────────────────────────────────────────────┤
│ SUPPORTING AGENTS │
│ ────────────────── │
│ 2. software-design-document-specialist (78%) - System docs │
│ 3. qa-reviewer (71%) - Documentation quality review │
│ │
├─────────────────────────────────────────────────────────────┤
│ WHEN TO USE WHICH │
│ ───────────────── │
│ • Use codi-documentation-writer when: User-facing API │
│ docs, guides, references │
│ • Use software-design-document-specialist instead when: │
│ Internal architecture docs, SDDs, system design │
│ • Use qa-reviewer instead when: Reviewing existing docs │
│ for quality, consistency, completeness │
└─────────────────────────────────────────────────────────────┘

How It Works

  1. Project Discovery - Detect current project from working directory (ADR-156)
  2. Context Loading - Load project-specific decisions and error patterns from org.db
  3. Task Analysis - Parse task description, extract keywords and intent
  4. Semantic Search - Query component database with full-text search
  5. Capability Match - Match task requirements to agent capabilities
  6. Context-Aware Ranking - Score agents by relevance + project context alignment
  7. WHEN Analysis - Compare top candidates, explain differentiation
  8. Invocation Generation - Create ready-to-use /agent command with project context
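The steps above can be sketched as one small driver. Here `search` and `load_context` stand in for `find_agents_for_task` and `get_project_context` from the Database Query section; the glue itself is illustrative:

```python
def which(task: str, search, load_context):
    """Illustrative /which pipeline: context load -> search -> invocation."""
    context = load_context()      # steps 1-2: project discovery + context
    candidates = search(task)     # steps 3-5: semantic + capability match
    if not candidates:
        return None               # triggers Fallback Behavior
    # Steps 6-7 (context-aware ranking, WHEN analysis) are elided here;
    # step 8 emits a ready-to-use invocation for the top candidate.
    name = candidates[0][0]
    return f'/agent {name} "{task}"'
```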

Match Scoring

| Score | Meaning |
|---------|---------|
| 90-100% | Exact match - agent specializes in this task |
| 75-89% | Strong match - agent well-suited |
| 60-74% | Moderate match - agent can help |
| <60% | Weak match - consider alternatives |
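The bands above map directly to labels (a minimal sketch; the function name is illustrative):

```python
def match_label(score: int) -> str:
    """Map a 0-100 match score to the bands in the Match Scoring table."""
    if score >= 90:
        return "Exact match"
    if score >= 75:
        return "Strong match"
    if score >= 60:
        return "Moderate match"
    return "Weak match"
```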

Health-Aware Routing

When multiple agents match, health is factored into ranking:

Ranking Formula: final_score = capability_match * 0.6 + health_score * 0.4

| Health | Status | Icon | Recommendation |
|--------|--------|------|----------------|
| ≥75% | Healthy | 🟢 | Prefer for critical tasks |
| 50-74% | Moderate | 🟡 | Acceptable, monitor |
| <50% | Unhealthy | 🔴 | Warn user, suggest alternative |
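The ranking formula can be written directly (weights from the 60/40 blend above; inputs normalised to 0-1):

```python
def final_score(capability_match: float, health_score: float) -> float:
    """Blend capability match and health per the 60/40 routing formula."""
    return capability_match * 0.6 + health_score * 0.4

# A healthy moderate match can outrank an unhealthy strong match:
final_score(0.82, 0.90)  # ≈ 0.852
final_score(0.94, 0.42)  # ≈ 0.732
```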

Querying Health:

# Get health score for agent
python3 scripts/skill-health-tracker.py --skill <agent-name> --json

Agent Categories (Auto-Generated)

These categories are queried from the database, not hardcoded:

# Get categories dynamically
python3 scripts/component-indexer.py --type agent --stats
| Category | Count | Example Agents |
|----------|-------|----------------|
| Development | ~25 | orchestrator, senior-architect, code-reviewer |
| DevOps | ~15 | devops-engineer, cloud-architect, k8s-specialist |
| Security | ~10 | security-specialist, penetration-testing-agent |
| Documentation | ~8 | codi-documentation-writer, qa-reviewer |
| Business | ~12 | business-intelligence-analyst, market-researcher |
| Data | ~10 | database-architect, data-engineering-specialist |
| Research | ~8 | research-agent, web-search-researcher |
| Quality | ~10 | testing-specialist, rust-qa-specialist |

Total: 776 agents (run /what agents for full list)


Fallback Behavior

No good match found:

No agent with >60% match found for: "<task>"

Suggestions:
1. Try rephrasing: /which <alternative phrasing>
2. Break down task: /which <subtask 1>, then /which <subtask 2>
3. Use orchestrator for complex multi-domain tasks:
/agent orchestrator "coordinate: <task>"
4. Search all components: /what --search "<keywords>"

Task spans multiple domains:

Task spans multiple domains. Recommended approach:

Option 1: Use orchestrator (coordinates multiple agents)
/agent orchestrator "coordinate: <task>"

Option 2: Sequential agents
1. /agent <agent-1> "<subtask-1>"
2. /agent <agent-2> "<subtask-2>"

Related Commands

| Command | Purpose |
|---------|---------|
| /what agents | List all 776 agents |
| /what can <agent> | Agent capabilities |
| /how use <agent> | Agent usage guide |
| /agent <name> <task> | Invoke agent |

Database Schema

The /which command queries these tables:

-- Components table
SELECT name, type, description FROM components WHERE type = 'agent';

-- Capabilities table
SELECT capability FROM capabilities WHERE component_id = ?;

-- Full-text search
SELECT * FROM component_search WHERE component_search MATCH ?;

Success Output

When agent discovery completes successfully:

✅ COMMAND COMPLETE: /which
Task: "<task-description>"
Primary Agent: <agent-name> (confidence: X%)
Alternatives: N agents ranked
Invocation: /agent <name> "<task>"

Completion Checklist

Before marking complete:

  • Database queried for matching agents
  • Results ranked by relevance
  • Primary recommendation provided
  • Invocation syntax shown

Failure Indicators

This command has FAILED if:

  • ❌ Component database not indexed
  • ❌ No agents match task description
  • ❌ Query execution error
  • ❌ Empty or unclear task description

When NOT to Use

Do NOT use when:

  • You already know which agent to use
  • Task matches built-in Claude Code agent directly
  • Looking for commands/skills (use /what instead)

Anti-Patterns (Avoid)

| Anti-Pattern | Problem | Solution |
|--------------|---------|----------|
| Vague task description | Poor matches | Be specific about the task |
| Ignoring alternatives | Miss better options | Review ranked alternatives |
| Skipping invocation step | No action taken | Execute the recommended agent |

Principles

This command embodies:

  • #4 Separation of Concerns - Single responsibility: task → agent routing
  • #8 No Assumptions Without Confirmation - Asks when no good match found
  • #9 Based on Facts, Cross-Check - Queries database, not hardcoded lists

Full Standard: CODITECT-STANDARD-AUTOMATION.md


Project Context Integration (ADR-156)

The /which command now loads project-specific context to improve agent recommendations:

| Context Type | Source | Purpose |
|--------------|--------|---------|
| Project ID | discover_project() | Scope context queries |
| Decisions | org.db:decisions | Understand architectural choices |
| Error Patterns | org.db:error_solutions | Learn from past issues |
| Skill Learnings | org.db:skill_learnings | Apply accumulated knowledge |

Benefits:

  • Recommendations informed by project history
  • Agent selection considers past decisions
  • Error patterns help avoid repeated issues
  • Better context passed to invoked agents

Usage:

/which deploy API          # Uses auto-detected project context
CODITECT_PROJECT=my-proj /which deploy API # Explicit project

Version: 2.1.0 | Created: 2025-12-22 | Updated: 2026-02-04 | Author: CODITECT Team | ADR: ADR-156