/how - Task Guidance

Learn HOW to accomplish tasks, HOW components work, and WHY certain approaches are recommended.

System Prompt

EXECUTION DIRECTIVE: When /how is invoked, you MUST:

  1. IMMEDIATELY search for relevant guides, ADRs, and workflows - no questions first
  2. Query database for related components and their documentation
  3. Provide step-by-step actionable guidance
  4. Explain WHY - rationale for each step, not just WHAT to do
  5. Link to sources - reference ADRs, standards, and components

Execution Steps

cd /Users/halcasteel/PROJECTS/coditect-rollout-master/submodules/core/coditect-core
source .venv/bin/activate

# Search for guides and workflows
python3 scripts/component-indexer.py --search "<task keywords>"

# Find relevant ADRs
grep -r "<keywords>" internal/architecture/adrs/ --include="*.md" -l

# Find relevant standards
grep -r "<keywords>" coditect-core-standards/ --include="*.md" -l

# Find related skills
python3 scripts/component-indexer.py --type skill --search "<task>"

Required Tools

| Tool | Purpose | Required |
|------|---------|----------|
| Grep | Search ADRs, standards, and guides | Yes |
| Glob | Find related documentation files | Yes |
| Read | Access guide content for display | Yes |
| Bash | Run component-indexer for search | Optional |

Database Access:

  • Component index: context-storage/components.db
  • Python venv: .venv/bin/activate
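
When `component-indexer.py` is unavailable, the component index can be queried directly. A minimal sketch, assuming a simple schema — the table and column names (`components`, `name`, `type`) are hypothetical; inspect the real schema with `sqlite3 context-storage/components.db .schema` first:

```python
import sqlite3

def search_components(db_path: str, keyword: str):
    """Return (name, type) rows matching keyword. Schema is assumed, not verified."""
    conn = sqlite3.connect(db_path)
    try:
        # Parameter substitution avoids quoting issues in the LIKE pattern.
        rows = conn.execute(
            "SELECT name, type FROM components WHERE name LIKE ?",
            (f"%{keyword}%",),
        ).fetchall()
    finally:
        conn.close()
    return rows
```

Usage would mirror the indexer: `search_components("context-storage/components.db", "my-agent")`.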

Usage

# Task guidance (HOW to do X)
/how create agent # Step-by-step guide
/how sync submodules # Git sync workflow
/how deploy to production # Deployment steps
/how classify documents # MoE classification

# Component explanation (HOW does X work, WHY this way)
/how does MoE classification # Mechanism explanation
/how does /component-create # Command internals
/how does orchestrator work # Agent workings

# Best practices (HOW best to do X)
/how best error-handling # Error handling patterns
/how best naming-conventions # Naming standards
/how best create components # Component creation

Response Format

For /how to <task>

┌─────────────────────────────────────────────────────────────┐
│ HOW TO: Create an Agent │
├─────────────────────────────────────────────────────────────┤
│ PREREQUISITES │
│ ────────────── │
│ • Understand the agent's purpose and scope │
│ • Check existing agents: /what agents --search "similar" │
│ • Review standard: CODITECT-STANDARD-AGENTS.md │
│ │
│ WHY: Following "Search Before Create" principle prevents │
│ duplication and ensures consistency. │
├─────────────────────────────────────────────────────────────┤
│ STEPS │
│ ───── │
│ 1. SEARCH for existing similar agents │
│ ```bash │
│ /what agents --search "my-capability" │
│ ``` │
│ WHY: Extend existing > create new (Principle #1) │
│ │
│ 2. CREATE using component-create │
│ ```bash │
│ ./scripts/component-create.sh agent my-agent "Desc" │
│ ``` │
│ WHY: Auto-registers, activates, classifies │
│ │
│ 3. EDIT the generated template │
│ • Set tools (Read, Write, Bash, etc.) │
│ • Set model preference (sonnet, opus, haiku) │
│ • Define capabilities and triggers │
│ • Add system prompt │
│ WHY: Tools define what agent CAN do │
│ │
│ 4. VERIFY registration │
│ ```bash │
│ python3 scripts/component-indexer.py --search "my-agent"│
│ ``` │
│ WHY: Ensures discoverability via /which │
│ │
│ 5. TEST the agent │
│ ```bash │
│ /agent my-agent "test task" │
│ ``` │
├─────────────────────────────────────────────────────────────┤
│ REFERENCES │
│ ────────── │
│ • Standard: coditect-core-standards/CODITECT-STANDARD- │
│ AGENTS.md │
│ • ADR: ADR-049-component-creation-lifecycle.md │
│ • Script: scripts/component-create.sh │
│ • Command: /component-create │
├─────────────────────────────────────────────────────────────┤
│ COMMON MISTAKES │
│ ─────────────── │
│ ❌ Creating without searching first │
│ ❌ Not setting appropriate tools │
│ ❌ Missing system prompt │
│ ❌ Not verifying registration │
└─────────────────────────────────────────────────────────────┘

For /how does <component>

┌─────────────────────────────────────────────────────────────┐
│ HOW DOES: MoE Classification │
├─────────────────────────────────────────────────────────────┤
│ OVERVIEW │
│ ──────── │
│ MoE (Mixture of Experts) classification uses multiple │
│ specialized "judges" to classify documents by type, │
│ achieving higher accuracy than single-classifier systems. │
├─────────────────────────────────────────────────────────────┤
│ MECHANISM │
│ ───────── │
│ 1. DOCUMENT ANALYSIS │
│ • Reads file content and structure │
│ • Extracts existing frontmatter │
│ • Generates embeddings via sentence-transformers │
│ WHY: Semantic understanding > keyword matching │
│ │
│ 2. JUDGE EVALUATION (5 specialists) │
│ • Content Judge - semantic analysis of text │
│ • Directory Judge - path/location context │
│ • Frontmatter Judge - existing metadata │
│ • Historical Judge - similar classified docs │
│ • Keyword Judge - pattern matching │
│ WHY: Multiple perspectives reduce bias │
│ │
│ 3. CONSENSUS & CONFIDENCE │
│ • Each judge votes on document type │
│ • Weighted average produces confidence score │
│ • >85% = auto-approved │
│ • <85% = human review suggested │
│ WHY: Confidence thresholds prevent misclassification │
│ │
│ 4. FRONTMATTER UPDATE │
│ • Adds: type, moe_confidence, moe_classified │
│ • Preserves existing metadata │
│ WHY: Non-destructive enhancement │
├─────────────────────────────────────────────────────────────┤
│ WHY THIS APPROACH │
│ ───────────────── │
│ • Ensemble methods outperform single classifiers │
│ • Specialized judges catch domain-specific patterns │
│ • Confidence scores enable quality gates │
│ • Historical learning improves over time │
├─────────────────────────────────────────────────────────────┤
│ INTERNALS │
│ ───────── │
│ Script: scripts/moe_classifier/classify.py │
│ Config: scripts/moe_classifier/config/ │
│ Judges: scripts/moe_classifier/judges/ │
│ Model: sentence-transformers (all-MiniLM-L6-v2) │
│ Database: context-storage/platform.db (ADR-118 Tier 1) │
└─────────────────────────────────────────────────────────────┘
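
The consensus step above can be sketched as a weighted vote. This is an illustrative sketch only: the judge names match the documentation, but the weights, vote format, and scoring formula are assumptions, not the actual classifier's implementation in `scripts/moe_classifier/classify.py`:

```python
# >85% = auto-approved, per the mechanism description above.
AUTO_APPROVE_THRESHOLD = 0.85

def consensus(votes, weights):
    """Combine per-judge (label, confidence) votes into a weighted decision.

    votes:   {judge: (label, confidence)}
    weights: {judge: weight} -- relative trust in each judge (assumed values).
    """
    scores = {}
    total = sum(weights.values())
    for judge, (label, confidence) in votes.items():
        scores[label] = scores.get(label, 0.0) + weights[judge] * confidence / total
    winner = max(scores, key=scores.get)
    return winner, scores[winner], scores[winner] > AUTO_APPROVE_THRESHOLD

votes = {
    "content": ("adr", 0.92),
    "directory": ("adr", 0.88),
    "frontmatter": ("adr", 0.95),
    "historical": ("guide", 0.60),  # one dissenting judge lowers confidence
    "keyword": ("adr", 0.90),
}
weights = {"content": 2.0, "directory": 1.0, "frontmatter": 1.5,
           "historical": 1.0, "keyword": 0.5}
label, score, approved = consensus(votes, weights)
```

Note how the dissenting Historical Judge drags the winning label's confidence below the threshold, triggering human review — exactly the quality gate the confidence score exists to enforce.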

For /how best <topic>

┌─────────────────────────────────────────────────────────────┐
│ HOW BEST: Error Handling │
├─────────────────────────────────────────────────────────────┤
│ PRINCIPLES │
│ ────────── │
│ 1. Fail fast with clear messages │
│ 2. Recover automatically when possible │
│ 3. Never swallow errors silently │
│ 4. Provide actionable guidance │
├─────────────────────────────────────────────────────────────┤
│ PATTERNS │
│ ──────── │
│ RECOVERABLE ERRORS (auto-retry) │
│ ```bash │
│ if ! download_file; then │
│ warning "Download failed, retrying..." │
│ sleep 2 │
│ download_file || error "Download failed after retry" │
│ fi │
│ ``` │
│ WHY: Network issues are often transient │
│ │
│ NON-RECOVERABLE ERRORS (fail fast) │
│ ```bash │
│ if ! install_required_tool; then │
│ error "Cannot install required_tool. Manual install:" │
│ error " brew install required_tool" │
│ exit 1 │
│ fi │
│ ``` │
│ WHY: Clear guidance > silent failure │
├─────────────────────────────────────────────────────────────┤
│ ANTI-PATTERNS │
│ ───────────── │
│ ❌ || true (swallows all errors) │
│ ❌ Empty catch blocks │
│ ❌ Generic "An error occurred" messages │
│ ❌ Exit without explanation │
├─────────────────────────────────────────────────────────────┤
│ REFERENCE │
│ ───────── │
│ Standard: CODITECT-STANDARD-AUTOMATION.md │
│ Section: Error Handling │
└─────────────────────────────────────────────────────────────┘
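
The recoverable-error shell pattern above translates directly to other languages. A minimal Python sketch under the same principles (`flaky_operation` and the retry parameters are illustrative, not part of any CODITECT API):

```python
import time

def with_retry(operation, attempts: int = 2, delay: float = 2.0):
    """Run `operation`, retrying transient failures; fail fast with a clear
    message after the last attempt (never swallow the error silently)."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception as exc:
            if attempt == attempts:
                # Fail fast: preserve the original cause and explain the failure.
                raise RuntimeError(f"failed after {attempts} attempts") from exc
            print(f"warning: {exc}, retrying...")
            time.sleep(delay)
```

The key design choice mirrors the shell version: retries are bounded, each failure is reported, and the final error carries actionable context instead of a generic message.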

Common How-To Topics

| Topic | Command | Source |
|-------|---------|--------|
| Create agent | /how create agent | ADR-049, STANDARD-AGENTS |
| Create command | /how create command | ADR-049, STANDARD-COMMANDS |
| Sync submodules | /how sync submodules | ADR-050, git-sync workflow |
| Classify documents | /how classify | MoE skill, /classify command |
| Deploy to GCP | /how deploy gcp | Cloud workflows |
| Use orchestrator | /how use orchestrator | orchestrator agent |
| Query context | /how query context | /cxq command |
| Create workflow | /how create workflow | STANDARD-WORKFLOWS |

Comparison: /how vs /which vs /what

| Command | Question | Answer Type |
|---------|----------|-------------|
| /what | What exists? | Inventory, capabilities, location |
| /which | Which tool for task? | Recommendations, rankings |
| /how | How to do X? Why? | Steps, mechanisms, rationale |

| Command | Purpose |
|---------|---------|
| /what <component> | Component details |
| /which <task> | Find best tool |
| /why <approach> | Alias for /how does <approach> |

Success Output

When guidance is provided successfully:

✅ COMMAND COMPLETE: /how
Topic: "<task-or-component>"
Steps: N actionable steps provided
Sources: ADRs, standards, guides referenced
Rationale: WHY explained for each step

Output Validation

Before completing, verify output contains:

  • Box-formatted response with proper sections
  • PREREQUISITES section (for /how to)
  • STEPS section with numbered actions
  • WHY explanation for each step
  • REFERENCES section with ADR/standard links
  • COMMON MISTAKES section (for /how to)
  • Code examples where applicable

Completion Checklist

Before marking complete:

  • Relevant guides/ADRs found
  • Steps provided in order
  • WHY explained for each step
  • Sources referenced

Failure Indicators

This command has FAILED if:

  • ❌ No relevant guides found
  • ❌ Topic too vague to search
  • ❌ Database query error
  • ❌ No actionable steps provided

When NOT to Use

Do NOT use when:

  • Need quick invocation (use /which + /agent)
  • Looking for component list (use /what)
  • Task is already well-understood

Anti-Patterns (Avoid)

| Anti-Pattern | Problem | Solution |
|--------------|---------|----------|
| Skip WHY section | Blindly follow steps | Always understand rationale |
| Ignore prerequisites | Skip required setup | Complete prereqs first |
| Generic guidance | Doesn't apply | Be specific in query |

Principles

This command embodies:

  • #2 First Principles Approach - Explains WHY, not just WHAT
  • #7 No Action Without Understanding - Requires context before guidance

Full Standard: CODITECT-STANDARD-AUTOMATION.md


Version: 2.0.0 Created: 2026-01-03 Updated: 2026-01-03 Author: CODITECT Team