# /explain - Intelligent Question Answering with Local Context

Answer questions about CODITECT, the codebase, or any topic by intelligently combining LLM reasoning with local knowledge base searches.

## Usage

```
/explain <question or topic>
```

## Examples

```bash
# Direct knowledge questions
/explain what is the Task Tool Pattern?
/explain how do agents work in CODITECT?
/explain what does setup.sh do?

# Questions requiring local context search
/explain what decisions were made about authentication?
/explain show me examples of API endpoint patterns
/explain what errors have been solved related to imports?

# Code understanding
/explain how does the git-sync command work?
/explain what is the component activation workflow?
```

## How It Works

The /explain command uses intelligent intent detection to determine the best answer strategy:

### Strategy 1: Direct LLM Answer

For general knowledge, concepts, or explanations that don't require local context.

**Triggers:** "what is", "how does X work", "explain the concept of", definitions

### Strategy 2: Local Search + LLM Synthesis

For questions about prior work, decisions, patterns, or errors in this project.

**Triggers:** "what decisions", "show examples", "what errors", "how did we", "what was done"

Uses these `/cxq` modes:

- `--decisions` - For architectural/design decisions
- `--patterns` - For code implementation patterns
- `--errors` - For error solutions
- `--recent N` - For recent activity context
- `--recall "topic"` - For topic-specific history
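As an illustration, mode selection can be sketched as a small helper that builds the `/cxq` invocation string. The `build_cxq` name is hypothetical; only the flag names come from the list above.

```python
from typing import Optional

def build_cxq(mode: str, value: Optional[str] = None) -> str:
    """Build a /cxq invocation string from a mode flag and an optional argument.

    Hypothetical helper for illustration; only the flag names are from the docs.
    """
    if mode == "recall":                 # --recall takes a quoted topic
        return f'/cxq --recall "{value}"'
    if mode == "recent":                 # --recent takes a bare count
        return f"/cxq --recent {value}"
    query = f"/cxq --{mode}"
    if value:                            # optional search term, quoted
        query += f' "{value}"'
    return query

print(build_cxq("decisions", "authentication"))  # /cxq --decisions "authentication"
print(build_cxq("recent", "10"))                 # /cxq --recent 10
```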

### Strategy 3: File Analysis + LLM Explanation

For questions about specific files, scripts, or components.

**Triggers:** "what does X.py do", "explain this file", "how does script X work"

**Uses:** the Read tool to get file contents, then explains them

## Intent Detection Logic

```
User Question
      │
      ▼
┌───────────────────────────────┐
│ Does question reference:      │
│ - Prior decisions/history?    │──Yes──▶ /cxq search + LLM synthesis
│ - Past errors/solutions?      │
│ - Implementation patterns?    │
│ - "What did we do about X?"   │
└───────────────────────────────┘
      │ No
      ▼
┌───────────────────────────────┐
│ Does question reference:      │
│ - Specific file/script?       │──Yes──▶ Read file + LLM explanation
│ - Component by name?          │
│ - "What does X.py do?"        │
└───────────────────────────────┘
      │ No
      ▼
┌───────────────────────────────┐
│ General knowledge question    │──────▶ Direct LLM answer
│ - Concepts, definitions       │
│ - How things work             │
│ - Best practices              │
└───────────────────────────────┘
```
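The routing above can be sketched as a simple cue check. This is a deliberately naive sketch, assuming plain substring and extension matching is enough for illustration; a real implementation would be more robust.

```python
import re

# Cue phrases taken from the trigger lists above; matching is deliberately
# naive (substring/regex) -- a real implementation would be more robust.
SEARCH_CUES = ("what decisions", "show examples", "what errors",
               "how did we", "what was done")
FILE_CUE = re.compile(r"\b[\w./-]+\.(?:py|sh|md|js|ts)\b")  # looks like a file name

def pick_strategy(question: str) -> str:
    """Return 'search', 'file', or 'direct' per the decision tree above."""
    q = question.lower()
    if any(cue in q for cue in SEARCH_CUES):
        return "search"   # prior work -> /cxq search + LLM synthesis
    if FILE_CUE.search(q):
        return "file"     # specific file -> Read file + LLM explanation
    return "direct"       # general knowledge -> direct LLM answer
```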

## Output Format

### For Direct Answers

```markdown
## Answer

[Clear, concise explanation]

### Key Points
- Point 1
- Point 2
- Point 3

### Related Commands/Resources
- `/command` - Description
- `docs/file.md` - Description
```

### For Search-Based Answers

```markdown
## Answer

[Synthesized answer from local context]

### Evidence from Knowledge Base

**Query Used:** `/cxq --decisions --limit 10`

**Relevant Findings:**
1. [Finding 1 with context]
2. [Finding 2 with context]

### Summary
[Key takeaways from the search results]
```

### For File Explanations

````markdown
## File: `path/to/file.ext`

**Purpose:** [One-line summary]

**What It Does:**
[Detailed explanation of functionality]

**Key Functions/Sections:**
| Name | Purpose |
|------|---------|
| function1 | Does X |
| function2 | Does Y |

**Usage:**
```bash
[Example usage]
```

**Related Files:**
- `related/file.py` - Connection
````

## System Prompt

**⚠️ EXECUTION DIRECTIVE:**
When the user invokes this command, you MUST:
1. **IMMEDIATELY execute** - no questions, no explanations first
2. **ALWAYS show full output** from script/tool execution
3. **ALWAYS provide summary** after execution completes

**DO NOT:**
- Say "I don't need to take action" - you ALWAYS execute when invoked
- Ask for confirmation unless `requires_confirmation: true` in frontmatter
- Skip execution even if it seems redundant - run it anyway

The user invoking the command IS the confirmation.

---

## Search Query Mapping

| Question Pattern | /cxq Query |
|-----------------|------------|
| "what decisions about X" | `/cxq --decisions "X"` |
| "examples of X pattern" | `/cxq --patterns "X"` |
| "errors with X" | `/cxq --errors "X"` |
| "what happened today" | `/cxq --today` |
| "recent activity on X" | `/cxq --recall "X"` |
| "last N messages" | `/cxq --recent N` |
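One way to implement the mapping table is a list of (pattern, template) pairs tried in order. The patterns below are illustrative approximations of the question phrasings in the table; real questions would need fuzzier matching.

```python
import re

# (pattern, /cxq template) pairs mirroring the mapping table above.
# Patterns are illustrative; real questions need fuzzier matching.
QUERY_MAP = [
    (re.compile(r"what decisions (?:about|were made about) (.+)"), '/cxq --decisions "{0}"'),
    (re.compile(r"examples of (.+?) patterns?"), '/cxq --patterns "{0}"'),
    (re.compile(r"errors with (.+)"), '/cxq --errors "{0}"'),
    (re.compile(r"what happened today"), "/cxq --today"),
    (re.compile(r"recent activity on (.+)"), '/cxq --recall "{0}"'),
    (re.compile(r"last (\d+) messages"), "/cxq --recent {0}"),
]

def map_question(question: str):
    """Return the /cxq query for a recognized question, else None."""
    q = question.lower().rstrip("?")
    for pattern, template in QUERY_MAP:
        match = pattern.search(q)
        if match:
            return template.format(*match.groups())
    return None
```

First match wins, so more specific patterns should be listed before general ones.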

## Implementation Notes

This command should:

1. **Parse Intent** - Analyze the question to determine strategy
2. **Execute Search** (if needed) - Run appropriate /cxq query
3. **Gather Context** (if needed) - Read relevant files
4. **Synthesize Answer** - Combine all context with LLM reasoning
5. **Format Response** - Present in clear, structured format

## Comparison to Other Commands

| Command | Purpose | When to Use |
|---------|---------|-------------|
| `/explain` | Answer questions with context | Understanding concepts, history, code |
| `/cxq` | Raw knowledge base search | Direct database queries |
| `/which` | Find right agent/command | Choosing tools for tasks |
| `/what` | Quick definitions | One-line answers |

## Tips

1. **Be specific** - "explain authentication flow" vs "explain auth"
2. **Indicate time** - "recent decisions" vs "all decisions about X"
3. **Reference files** - "explain scripts/test-suite.py" for file-specific help
4. **Ask follow-ups** - Build on previous explanations

## Success Output

When explanation completes:

```
✅ COMMAND COMPLETE: /explain
   Strategy: <direct|search|file>
   Question: <question>
   Sources: N referenced
   Query: <query> (if search)
   Files: N read (if file analysis)
```


## Completion Checklist

Before marking complete:
- [ ] Intent detected
- [ ] Strategy selected
- [ ] Search/read executed (if needed)
- [ ] Answer synthesized
- [ ] Sources cited

## Failure Indicators

This command has FAILED if:
- ❌ No question provided
- ❌ Search returned no results
- ❌ File not found (file analysis)
- ❌ Answer incomplete

## When NOT to Use

**Do NOT use when:**
- Need raw search results (use /cxq)
- Choosing tool (use /which)
- One-line answer (use /what)

## Anti-Patterns (Avoid)

| Anti-Pattern | Problem | Solution |
|--------------|---------|----------|
| Vague question | Poor results | Be specific |
| Skip context | Incomplete answer | Include relevant files |
| Ignore sources | Unverifiable | Always cite evidence |

## Principles

This command embodies:
- **#9 Based on Facts** - Evidence-based answers
- **#6 Clear, Understandable** - Structured explanations
- **#2 Search Before Create** - Find existing knowledge

**Full Standard:** [CODITECT-STANDARD-AUTOMATION.md](pathname://coditect-core-standards/CODITECT-STANDARD-AUTOMATION.md)

---

**Version:** 1.0.0
**Created:** 2025-12-22
**Author:** CODITECT Team
**Related:** /cxq, /which, /what