# Explain Skill - Intelligent Question Answering

## Purpose

Automatically determine the best strategy to answer user questions by combining:
- Direct LLM knowledge
- Local knowledge base searches (/cxq)
- File content analysis
- Agent expertise when needed
## Activation

This skill activates when:
- User runs `/explain <question>`
- User asks "explain...", "what is...", or "how does..."
- User needs to understand code, concepts, or project history
## Intent Classification

### Pattern Matching for Intent Detection

```python
INTENT_PATTERNS = {
    "local_search": [
        r"what (decisions|choices) .*(made|about)",
        r"(show|give|find) .*(examples?|patterns?)",
        r"what errors?.*(with|about|related)",
        r"(what|how) did we",
        r"(what|when) was .*(done|decided|implemented)",
        r"history of",
        r"previous .*(work|implementation|decision)",
        r"(recent|latest|today) .*(activity|work|changes)",
    ],
    "file_analysis": [
        r"what does [\w/.-]+\.(py|sh|md|json|ts|js) do",
        r"explain [\w/.-]+\.(py|sh|md|json|ts|js)",
        r"how does [\w/.-]+\.(py|sh|md|json|ts|js) work",
        r"analyze (the )?(file|script|code)",
    ],
    "direct_answer": [
        r"what is (a |the )?[\w\s-]+\??$",
        r"how (do|does|to) [\w\s]+\??$",
        r"explain (the )?(concept|idea|pattern) of",
        r"(define|definition of)",
        r"best practices? for",
    ]
}
```
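A minimal sketch of how these patterns could drive classification. The check order (file_analysis before local_search before direct_answer, so explicit file paths win) is an assumption, not specified above; the pattern set is abbreviated to a few entries per intent.

```python
import re

# Abbreviated subset of the INTENT_PATTERNS table above.
INTENT_PATTERNS = {
    "local_search": [r"(what|how) did we", r"history of"],
    "file_analysis": [r"what does [\w/.-]+\.(py|sh|md|json|ts|js) do"],
    "direct_answer": [r"what is (a |the )?[\w\s-]+\??$"],
}

def classify_intent(question: str) -> str:
    """Return the first matching intent; fall back to direct_answer."""
    q = question.lower()
    # Assumed priority: an explicit file path beats generic phrasing.
    for intent in ("file_analysis", "local_search", "direct_answer"):
        if any(re.search(p, q) for p in INTENT_PATTERNS[intent]):
            return intent
    return "direct_answer"
```

With the full pattern table, the same loop applies; only the lists grow.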
## Query Mapping

### Automatic /cxq Query Selection

```python
QUERY_MAPPING = {
    # Decision-related questions
    "decision": {
        "patterns": ["decision", "chose", "choice", "why did", "architectural"],
        "query": "/cxq --decisions --limit 20",
        "query_with_topic": "/cxq --decisions \"{topic}\" --limit 20"
    },
    # Pattern/example questions
    "pattern": {
        "patterns": ["example", "pattern", "implementation", "how to implement"],
        "query": "/cxq --patterns --limit 20",
        "query_with_topic": "/cxq --patterns \"{topic}\" --limit 20"
    },
    # Error/solution questions
    "error": {
        "patterns": ["error", "bug", "fix", "solved", "issue", "problem"],
        "query": "/cxq --errors --limit 20",
        "query_with_topic": "/cxq --errors \"{topic}\" --limit 20"
    },
    # Recent activity questions
    "recent": {
        "patterns": ["recent", "today", "yesterday", "this week", "latest"],
        "query": "/cxq --recent 100",
        "query_with_topic": "/cxq --recall \"{topic}\""
    },
    # General topic recall
    "recall": {
        "patterns": ["about", "regarding", "related to", "concerning"],
        "query_with_topic": "/cxq --recall \"{topic}\" --limit 50"
    }
}
```
## Execution Flow

```text
┌─────────────────────────────────────────────────────────────┐
│                     /explain <question>                     │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│ 1. INTENT CLASSIFICATION                                    │
│                                                             │
│    Analyze question against INTENT_PATTERNS                 │
│    Extract: intent_type, topic, file_path (if any)          │
└─────────────────────────────────────────────────────────────┘
                              │
          ┌───────────────────┼───────────────────┐
          ▼                   ▼                   ▼
 ┌──────────────────┐ ┌──────────────────┐ ┌──────────────────┐
 │   LOCAL_SEARCH   │ │  FILE_ANALYSIS   │ │  DIRECT_ANSWER   │
 └──────────────────┘ └──────────────────┘ └──────────────────┘
          │                   │                   │
          ▼                   ▼                   ▼
 ┌──────────────────┐ ┌──────────────────┐ ┌──────────────────┐
 │ 2a. Build Query  │ │ 2b. Read File    │ │ 2c. Use LLM      │
 │ - Map to /cxq    │ │ - Get contents   │ │   knowledge      │
 │ - Execute search │ │ - Parse purpose  │ │   directly       │
 └──────────────────┘ └──────────────────┘ └──────────────────┘
          │                   │                   │
          ▼                   ▼                   ▼
┌─────────────────────────────────────────────────────────────┐
│ 3. CONTEXT AGGREGATION                                      │
│                                                             │
│    Combine all gathered context:                            │
│    - Search results (if any)                                │
│    - File contents (if any)                                 │
│    - LLM knowledge                                          │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│ 4. ANSWER SYNTHESIS                                         │
│                                                             │
│    Generate comprehensive answer with:                      │
│    - Clear explanation                                      │
│    - Evidence/sources cited                                 │
│    - Related resources                                      │
│    - Next steps (if applicable)                             │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│ 5. FORMATTED OUTPUT                                         │
│                                                             │
│    ## Answer                                                │
│    [Main explanation]                                       │
│                                                             │
│    ### Evidence (if search was used)                        │
│    [Citations from knowledge base]                          │
│                                                             │
│    ### Related                                              │
│    [Commands, files, docs to explore]                       │
└─────────────────────────────────────────────────────────────┘
```
## Implementation Guidance

When this skill is activated, follow this process:

### Step 1: Classify Intent

Analyze the question:
- Does it reference past work/decisions? → LOCAL_SEARCH
- Does it mention a specific file? → FILE_ANALYSIS
- Is it a general concept question? → DIRECT_ANSWER
### Step 2: Gather Context

For LOCAL_SEARCH:

```bash
# Determine the right /cxq query. Examples:
/cxq --decisions "authentication"   # For decision questions
/cxq --patterns --language python   # For code pattern questions
/cxq --errors "import"              # For error questions
/cxq --recall "git sync"            # For topic recall
```

For FILE_ANALYSIS:

```text
# Read the file
Read tool: path/to/file.ext
# Also check for related files
Glob: related patterns
```

For DIRECT_ANSWER:

Use LLM knowledge directly, but check whether local docs exist:
- docs/guides/*.md
- docs/reference/*.md
- commands/*.md
- agents/*.md
### Step 3: Synthesize Answer

Combine all context into a coherent answer:
- Lead with the direct answer
- Support it with evidence from searches
- Include code examples if relevant
- Suggest related resources
## Example Executions

### Example 1: Decision Question

**Input:** `/explain what decisions were made about the database?`

**Process:**
1. Intent: LOCAL_SEARCH (contains "decisions")
2. Query: `/cxq --decisions "database" --limit 20`
3. Synthesize: Summarize the database-related decisions found

**Output:**
## Database Decisions
Based on project history, the following decisions were made:
### 1. SQLite for Context Storage
- **When:** November 2025
- **Reason:** Lightweight, file-based, no server needed
- **Tradeoff:** Limited concurrent access
### 2. FoundationDB for Production
- **When:** December 2025
- **Reason:** Distributed, ACID, scalable
- **Status:** Planned for Phase 4
### Evidence
Query: `/cxq --decisions "database" --limit 20`
Found 8 relevant entries spanning Nov-Dec 2025.
### Related
- `internal/architecture/ADR-003-DATABASE-STRATEGY.md`
- `/agent database-architect` for DB tasks
### Example 2: File Explanation

**Input:** `/explain what does scripts/test-suite.py do?`

**Process:**
1. Intent: FILE_ANALYSIS (contains file path)
2. Action: Read `scripts/test-suite.py`
3. Synthesize: Explain purpose, structure, usage

**Output:**
## File: `scripts/test-suite.py`
**Purpose:** Comprehensive test runner for all CODITECT components.
**What It Does:**
1. Discovers all test files (test_*.py)
2. Runs pytest with coverage reporting
3. Validates component integrity
4. Generates test reports
**Key Functions:**
| Function | Purpose |
|----------|---------|
| `run_all_tests()` | Execute full test suite |
| `run_category()` | Run tests for specific category |
| `generate_report()` | Create coverage report |
**Usage:**
```bash
python3 scripts/test-suite.py            # All tests
python3 scripts/test-suite.py --agents   # Agent tests only
python3 scripts/test-suite.py --report   # With HTML report
```
**Related:**
- `pytest.ini` - pytest configuration
- `tests/` - test files directory
### Example 3: Direct Answer
**Input:** `/explain what is the Task Tool Pattern?`
**Process:**
1. Intent: DIRECT_ANSWER (concept question)
2. Action: Use LLM knowledge + check docs
3. Synthesize: Clear explanation with examples
**Output:**

````markdown
## The Task Tool Pattern

The Task Tool Pattern is the verified method for invoking specialized agents in CODITECT.

**Syntax:**

```python
Task(
    subagent_type="agent-name",
    prompt="Detailed description of the task"
)
```

**Key Points:**
- `subagent_type` specifies which agent to invoke
- `prompt` provides the task instructions
- For CODITECT agents, use `general-purpose` as a proxy

**Example:**

```python
Task(
    subagent_type="orchestrator",
    prompt="Coordinate complete feature development for user authentication"
)
```

### Related
- `docs/getting-started/USER-quick-start.md` - Quick start guide
- `/agent` command - Universal agent invoker
- `CLAUDE.md` - Agent invocation section
````
## Error Handling

If a search returns no results:

```markdown
## Answer

I couldn't find specific information about "{topic}" in the knowledge base.

**Attempted:** `/cxq --recall "{topic}"`
**Results:** 0 matches

**Suggestions:**
1. Try a broader search term
2. Check if the topic uses different terminology
3. The information may not be indexed yet - run `/cx` to capture recent work
```
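A sketch of how the no-results branch could be rendered programmatically. The `answer_or_fallback` helper, its signature, and the assumption that `/cxq` results arrive as a list of strings are all illustrative, not part of the skill spec.

```python
def answer_or_fallback(topic: str, results: list[str]) -> str:
    """Render either a result summary or the no-results fallback template."""
    if results:
        # Real synthesis would summarize, not just list; this is a placeholder.
        return "## Answer\n" + "\n".join(f"- {r}" for r in results)
    return (
        "## Answer\n\n"
        f'I couldn\'t find specific information about "{topic}" '
        "in the knowledge base.\n\n"
        f'**Attempted:** `/cxq --recall "{topic}"`\n'
        "**Results:** 0 matches\n\n"
        "**Suggestions:**\n"
        "1. Try a broader search term\n"
        "2. Check if the topic uses different terminology\n"
        "3. Run `/cx` to capture recent work\n"
    )
```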
## Integration Points

- **Uses:** `/cxq`, Read tool, Glob tool
- **Complements:** `/which` (for tool selection), `/what` (for quick definitions)
- **Invokes:** Specialized agents when deep expertise is needed
## Success Output

When successful, this skill MUST output:

```text
✅ SKILL COMPLETE: explain

Completed:
- [x] Question classified (LOCAL_SEARCH, FILE_ANALYSIS, or DIRECT_ANSWER)
- [x] Appropriate context gathered (/cxq queries, file reads, LLM knowledge)
- [x] Comprehensive answer synthesized with evidence
- [x] Related resources identified and provided
- [x] Answer formatted with proper sections

Answer Quality:
- Intent classification: CORRECT
- Context sources: X sources (knowledge base: Y, files: Z, LLM: direct)
- Answer completeness: COMPREHENSIVE
- Evidence citations: X citations provided
- Related resources: Y links/commands

Outputs:
- Formatted answer with sections (Answer, Evidence, Related)
- Source citations (if search used)
- Next steps or related resources
- Context queries executed (if applicable)
```
## Completion Checklist

Before marking this skill as complete, verify:
- [ ] Question intent correctly classified (LOCAL_SEARCH, FILE_ANALYSIS, DIRECT_ANSWER)
- [ ] Appropriate /cxq query executed (if LOCAL_SEARCH)
- [ ] Correct file(s) read and analyzed (if FILE_ANALYSIS)
- [ ] Answer addresses the question directly and comprehensively
- [ ] Evidence cited from knowledge base searches (if applicable)
- [ ] Related resources provided (commands, files, docs)
- [ ] Answer formatted with clear sections (## Answer, ### Evidence, ### Related)
- [ ] No assumptions made without noting them explicitly
- [ ] File paths provided are absolute (not relative)
- [ ] Commands provided are runnable (verified syntax)
## Failure Indicators

This skill has FAILED if:
- ❌ Intent misclassified (wrong query type selected)
- ❌ /cxq query returned 0 results when information exists
- ❌ File read for wrong file or file not found
- ❌ Answer doesn't address the question asked
- ❌ No evidence provided when search was used
- ❌ Incorrect or outdated information provided
- ❌ Broken file paths or invalid commands suggested
- ❌ Answer too generic (could apply to any question)
- ❌ Missing context from available knowledge base
- ❌ Circular references (answer refers to the question itself)
## When NOT to Use

Do NOT use this skill when:
- User wants to execute an action (use the appropriate action skill/command)
- A simple factual lookup is available in recent messages (just answer directly)
- Question is a command invocation (execute the command instead)
- User is debugging code (use the `debugging-patterns` skill)
- User wants to create new content (use the `content-generation` skill)
- Question is rhetorical or conversational (no skill needed)
- User wants comparison/analysis of options (use the `decision-analysis` skill)
- A procedural walkthrough is needed (use the `workflow-execution` skill)

Use alternatives:
- Action execution → Execute the command/skill directly
- Debugging → `debugging-patterns` skill
- Content creation → `content-generation` skill
- Decision analysis → `decision-analysis` skill
- Workflows → `workflow-execution` skill
## Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Generic answers without context search | Misses local knowledge, inaccurate | Always check /cxq for decisions, patterns, errors |
| Reading files without verifying path | File not found errors | Use Glob to verify file exists first |
| Quoting search results verbatim | Not synthesized, hard to understand | Summarize and synthesize from multiple sources |
| No evidence section | User can't verify answer | Always cite sources when using /cxq |
| Relative file paths in answers | Paths don't work from user location | Provide absolute paths only |
| Not extracting topic from question | Generic search, poor results | Parse question for key topics before /cxq |
| Using wrong /cxq flag | Inefficient search, missed results | Match flag to intent (--decisions, --patterns, --errors, --recall) |
| Answering without checking recent context | Repeats work, wastes tokens | Use /cxq --recent first for ongoing topics |
| No related resources | User doesn't know next steps | Always provide commands, files, or docs to explore |
| Over-explaining simple questions | Token waste, frustrating | Match answer depth to question complexity |
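The "Not extracting topic from question" anti-pattern can be avoided with even a naive pass that strips question scaffolding before building the `/cxq` query. The stop-word list below is an assumption for illustration; a real implementation might use the intent patterns to isolate the topic instead.

```python
import re

# Hypothetical stop-word list covering common question scaffolding.
STOPWORDS = {
    "what", "decisions", "were", "made", "about", "the", "a",
    "explain", "how", "does", "do", "is", "did", "we", "why",
}

def extract_topic(question: str) -> str:
    """Drop scaffolding words; keep the remaining phrase as the topic."""
    words = re.findall(r"[\w-]+", question.lower())
    return " ".join(w for w in words if w not in STOPWORDS)
```

For example, "what decisions were made about the database?" reduces to the topic "database", ready for `/cxq --decisions "database"`.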
## Principles

This skill embodies:
- **#1 Recycle → Extend → Re-Use → Create** - Reuses the knowledge base before generating new answers
- **#5 Eliminate Ambiguity** - Intent classification removes question ambiguity
- **#6 Clear, Understandable, Explainable** - Formatted answers with evidence and reasoning
- **#8 No Assumptions** - Searches the knowledge base rather than assuming user context
- **#10 Research When in Doubt** - Uses /cxq to research local knowledge before answering

Full Standard: CODITECT-STANDARD-AUTOMATION.md

**Version:** 1.0.0 | **Created:** 2025-12-22 | **Category:** Knowledge Management