# /moe-analyze - Multi-Agent Research with Certainty Quantification
Execute comprehensive multi-agent research using coordinated analyst subagents with explicit certainty scoring, evidence requirements, and logical inference documentation.
## Usage

```bash
# Basic analysis
/moe-analyze "Analyze current DMS functional requirements for completeness"

# With specific focus
/moe-analyze --focus compliance "Review HIPAA and SOC2 alignment"

# With web search enabled
/moe-analyze --with-search "Research enterprise DMS market standards 2024-2025"

# Include judgement phase
/moe-analyze --with-judgement "Evaluate security implementation against OWASP Top 10"

# Specify certainty threshold
/moe-analyze --min-certainty 80 "Validate database schema design decisions"
```
## System Prompt
⚠️ EXECUTION DIRECTIVE: When the user invokes this command, you MUST:
- IMMEDIATELY execute - no questions, no explanations first
- ALWAYS show full output from script/tool execution
- ALWAYS provide summary after execution completes
DO NOT:
- Say "I don't need to take action" - you ALWAYS execute when invoked
- Ask for confirmation unless `requires_confirmation: true` is set in frontmatter
- Skip execution even if it seems redundant - run it anyway
The user invoking the command IS the confirmation.
You are executing the MoE Analysis Framework, coordinating multiple analyst agents to research a topic with explicit certainty scoring and evidence requirements.
## Workflow

### Phase 1: Analyst Dispatch (Parallel)
Dispatch 4-6 analyst agents simultaneously:
```python
# Execute these Task calls in parallel
Task(
    subagent_type="web-search-researcher",
    description="Primary research gathering",
    prompt="""Research the topic: {user_topic}

MANDATORY: For EACH finding, provide:
1. Certainty Score (0-100%)
2. Source URLs with reliability rating
3. Source recency (publication date)
4. Direct quotes or evidence
5. Gaps in available information

If no reliable source exists, explicitly state:
"NO SUPPORTING DATA AVAILABLE"
"""
)

Task(
    subagent_type="thoughts-analyzer",
    description="Internal knowledge analysis",
    prompt="""Analyze existing internal documentation for: {user_topic}

Search docs/, architecture files, and prior research.
Rate certainty based on documentation quality and recency.
"""
)

Task(
    subagent_type="codebase-analyzer",
    description="Implementation verification",
    prompt="""Verify current implementation state for: {user_topic}

Check actual code against documented requirements.
Identify discrepancies with evidence.
"""
)

Task(
    subagent_type="qa-reviewer",
    description="Quality validation",
    prompt="""Review analysis quality for: {user_topic}

Check for logical consistency.
Identify unsupported claims.
Flag overconfident assertions.
"""
)
```
### Phase 2: Evidence Synthesis
Aggregate and cross-validate findings:
- Collect all agent reports
- Cross-reference overlapping claims
- Identify contradictions requiring resolution
- Calculate composite certainty scores
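The cross-validation steps above can be sketched as follows. The report shape (claim ID, boolean verdict, certainty) is a hypothetical simplification for illustration, not the agents' actual output format:

```python
from collections import defaultdict

# Hypothetical per-agent findings: (claim_id, verdict, certainty 0-1).
reports = {
    "web-search-researcher": [("schema-is-versioned", True, 0.90)],
    "codebase-analyzer":     [("schema-is-versioned", False, 0.70)],
    "thoughts-analyzer":     [("schema-is-versioned", True, 0.60)],
}

def cross_validate(reports):
    """Group per-claim verdicts and flag claims where analysts disagree."""
    by_claim = defaultdict(list)
    for agent, findings in reports.items():
        for claim, verdict, certainty in findings:
            by_claim[claim].append((agent, verdict, certainty))
    # A claim is contradicted when its analysts returned mixed verdicts.
    contradictions = {
        claim: votes
        for claim, votes in by_claim.items()
        if len({verdict for _, verdict, _ in votes}) > 1
    }
    return dict(by_claim), contradictions

by_claim, contradictions = cross_validate(reports)
print(sorted(contradictions))  # claims requiring resolution in Phase 3
```

Contradicted claims are carried into Phase 3 with a reduced internal-consistency factor rather than silently dropped.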
### Phase 3: Certainty Scoring

For each finding, calculate:

```python
certainty_score = (
    evidence_support * 0.40 +      # How well supported by sources
    source_reliability * 0.25 +    # Quality of sources
    internal_consistency * 0.20 +  # Agreement across agents
    recency * 0.15                 # How current the information is
)
```
Certainty Levels:
| Score | Level | Meaning |
|---|---|---|
| 85-100% | HIGH | Strong evidence, reliable sources, consensus |
| 60-84% | MEDIUM | Good evidence, some gaps or disagreement |
| 30-59% | LOW | Limited evidence, significant uncertainty |
| 0-29% | INFERRED | No direct evidence, logical inference only |
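The weighted formula and the level thresholds in the table translate directly into code. A minimal sketch (factor names and the 0-100 scale follow the formula; everything else is illustrative):

```python
# Weights from the Phase 3 formula (must sum to 1.0).
WEIGHTS = {
    "evidence_support": 0.40,
    "source_reliability": 0.25,
    "internal_consistency": 0.20,
    "recency": 0.15,
}

def certainty_score(factors):
    """Weighted composite of the four factor scores (each 0-100)."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

def certainty_level(score):
    """Map a composite score to the levels defined in the table."""
    if score >= 85:
        return "HIGH"
    if score >= 60:
        return "MEDIUM"
    if score >= 30:
        return "LOW"
    return "INFERRED"

score = certainty_score({
    "evidence_support": 80,
    "source_reliability": 70,
    "internal_consistency": 90,
    "recency": 50,
})
print(round(score, 1), certainty_level(score))  # 75.0 MEDIUM
```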
### Phase 4: Gap Documentation
MANDATORY: Explicitly document:
- What information was NOT found
- What sources were unavailable
- What would increase certainty
- Known limitations of analysis
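Gap records can be collected as structured data and rendered straight into the report's Information Gaps table. A sketch, with entirely hypothetical gap entries:

```python
# Hypothetical gap records: (gap, impact, what_would_help).
gaps = [
    ("No public SLA data found for the vendor", "HIGH",
     "Request vendor documentation"),
    ("Benchmark results older than 12 months", "MEDIUM",
     "Re-run benchmark on current release"),
]

def render_gap_table(gaps):
    """Render gap records as the markdown table used in the final report."""
    lines = ["| Gap | Impact | What Would Help |",
             "|-----|--------|-----------------|"]
    lines += [f"| {gap} | {impact} | {action} |"
              for gap, impact, action in gaps]
    return "\n".join(lines)

print(render_gap_table(gaps))
```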
### Phase 5: Logical Inference (when data unavailable)
When evidence is insufficient, provide:
```markdown
## Inferred Conclusion: [Statement]

**Inference Type:** [Deduction/Induction/Abduction]

**Reasoning Chain:**
1. Premise: [Statement] (Certainty: X%)
   Evidence: [Source or "Assumed"]
2. Premise: [Statement] (Certainty: X%)
   Evidence: [Source or "Assumed"]
3. Therefore: [Conclusion]

**Decision Tree:**
IF [condition A] is true (evidence: X)
AND [condition B] is true (evidence: Y)
THEN [conclusion C] follows WITH confidence: Z%
ELSE [alternative conclusion]

**Inference Confidence:** [0-100%]
**Key Assumptions:** [List assumptions made]
**Falsification Criteria:** [What would disprove this]
```
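One simple way to derive the chain's overall confidence from its premise certainties is multiplication. This is a sketch under an independence assumption (the framework does not mandate this rule), and it errs conservative: a chain is no stronger than the joint probability of all premises holding:

```python
def chain_confidence(premise_certainties):
    """Combine premise certainties (each 0-1) by multiplication,
    assuming the premises are independent of one another."""
    conf = 1.0
    for c in premise_certainties:
        conf *= c
    return conf

# Hypothetical two-premise chain at 90% and 75% certainty.
print(f"{chain_confidence([0.90, 0.75]):.1%}")  # 67.5%
```

Note how quickly confidence decays: three premises at 80% each already land the conclusion in the INFERRED band, which is exactly why Phase 5 requires explicit assumptions and falsification criteria.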
## Output Format

```markdown
# MoE Analysis Report: [Topic]

**Generated:** [Timestamp]
**Analysts:** [List of agents used]
**Overall Certainty:** [X%] ([LEVEL])

---

## Executive Summary

[2-3 sentence summary with key certainty caveats]

---

## Findings

### Finding 1: [Title]

**Certainty:** [X%] ([LEVEL])

**Evidence:**
1. **Source:** [URL]
   - Reliability: [Peer-reviewed/Government/Industry/Blog/Unknown]
   - Date: [Publication date]
   - Quote: "[Relevant excerpt]"
2. **Source:** [URL]
   - ...

**Cross-Validation:** [How agents agreed/disagreed]
**Gaps:** [What's missing]

---

### Finding 2: [Title]
...

---

## Information Gaps

| Gap | Impact | What Would Help |
|-----|--------|-----------------|
| [Missing info] | [HIGH/MEDIUM/LOW] | [Suggested research] |

---

## Inferred Conclusions (Low Evidence)

### Inference 1: [Statement]

**Certainty:** [X%] (INFERRED)
**Reasoning Chain:** [Step-by-step logic]
**Decision Tree:** [Visual representation]
**Key Assumptions:** [List]

---

## Confidence Summary

| Finding | Certainty | Evidence Quality | Sources |
|---------|-----------|------------------|---------|
| [F1] | X% | [Rating] | [Count] |
| [F2] | X% | [Rating] | [Count] |

**Aggregate Certainty:** [X%]
**Lowest Confidence Area:** [Topic]
**Recommended Follow-up:** [Next steps]
```
## Options

| Option | Description |
|---|---|
| `--focus [area]` | Narrow research to a specific domain |
| `--with-search` | Enable web search for external sources |
| `--with-judgement` | Add MoE judge evaluation phase |
| `--min-certainty [N]` | Require minimum certainty threshold |
| `--output [path]` | Write report to a specific file |
| `--agents [N]` | Number of analyst agents (default: 4) |
## Action Policy
<default_behavior> This command EXECUTES research and PRODUCES deliverables:
- Dispatches analyst agents in parallel
- Gathers and synthesizes evidence
- Calculates certainty scores
- Documents gaps and limitations
- Generates comprehensive report
User receives actionable analysis with clear confidence levels. </default_behavior>
**Related Commands:**
- `/moe-judge` - Evaluate and grade analysis quality
- `/multi-agent-research` - Standard research workflow
- `/smart-research` - Single-agent research

**Related Skills:**
- `uncertainty-quantification` - Certainty scoring patterns
- `evaluation-framework` - Quality assessment rubrics
## Success Output

When moe-analyze completes:

```
✅ COMMAND COMPLETE: /moe-analyze
Topic: <analysis-topic>
Analysts: N dispatched
Certainty: N% (LEVEL)
Findings: N with evidence
Gaps: N documented
```
## Completion Checklist
Before marking complete:
- Analysts dispatched
- Evidence gathered
- Certainty scored
- Gaps documented
- Report generated
## Failure Indicators
This command has FAILED if:
- ❌ No topic provided
- ❌ No analysts responded
- ❌ No certainty scores
- ❌ Gaps not documented
## When NOT to Use
Do NOT use when:
- Simple factual query
- Single source sufficient
- Time-critical (use `/smart-research`)
## Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Skip certainty | Overconfidence | Always score certainty |
| Ignore gaps | False completeness | Document all gaps |
| Too few agents | Limited perspective | Use 4+ analysts |
## Principles
This command embodies:
- #9 Based on Facts - Evidence with certainty
- #6 Clear, Understandable - Clear confidence levels
- #3 Complete Execution - Full analysis workflow
**Full Standard:** CODITECT-STANDARD-AUTOMATION.md

**Version:** 1.0.0 | **Last Updated:** 2025-12-19