/adr-decision - Architecture Decision Record Generator
Create Architecture Decision Records using MoE multi-agent workflow
Classify documents using the complete 4-phase Mixture of Experts (MoE) pipeline. Supports recursive directory analysis, automatic frontmatter updates, and comprehensive reporting.
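The four phases are not spelled out in this entry; as a rough sketch (phase names, expert set, and scoring are all assumptions, not the actual pipeline), a route → score → consensus → report split might look like:

```python
from dataclasses import dataclass

# Hypothetical Type Experts: each scores how strongly a document matches its type.
EXPERTS = {
    "adr":    lambda text: text.lower().count("decision") / max(len(text.split()), 1),
    "report": lambda text: text.lower().count("summary") / max(len(text.split()), 1),
    "readme": lambda text: text.lower().count("install") / max(len(text.split()), 1),
}

@dataclass
class Classification:
    label: str
    confidence: float

def classify(text: str) -> Classification:
    # Phase 1: gating — every expert scores the document.
    scores = {name: fn(text) for name, fn in EXPERTS.items()}
    # Phase 2: consensus — pick the top-scoring expert.
    label = max(scores, key=scores.get)
    # Phase 3: confidence — normalize the winner against the total score mass.
    total = sum(scores.values()) or 1.0
    confidence = scores[label] / total
    # Phase 4: report — return a structured result suitable for frontmatter updates.
    return Classification(label, confidence)
```

A recursive directory run would simply map `classify` over every file and write `label`/`confidence` back into each file's frontmatter.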
Calibrate and manage MoE classification thresholds and confidence scores
Train MoE system from confirmed classifications and feedback
5-step session orientation combining project discovery, context search, agent discovery, invocation creation, and MoE-orchestrated execution.
Show recently modified files with optional git diff analysis and MoE review export.
Run the complete MoE UI optimization pipeline including quality gate validation, navigation optimization, and visual design refinement for enterprise-grade UI generation.
Multi-perspective verification layer using Constitutional Court pattern with judge personas, debate protocol, and multi-model evaluation
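The Constitutional Court pattern described above might be sketched as follows. The judge personas, their scoring rules, the 0.3 dissent margin, and the approval threshold are all hypothetical placeholders, not the actual protocol:

```python
from statistics import mean

# Hypothetical judge personas; each returns a verdict score in [0, 1].
JUDGES = {
    "strict_constructionist": lambda claim: 0.9 if "evidence" in claim else 0.3,
    "pragmatist":             lambda claim: 0.7,
    "skeptic":                lambda claim: 0.8 if "evidence" in claim else 0.2,
}

def constitutional_review(claim: str, threshold: float = 0.6) -> dict:
    # Round 1: each judge evaluates the claim independently.
    votes = {name: judge(claim) for name, judge in JUDGES.items()}
    # Debate protocol (simplified): outliers revise toward the panel mean.
    panel_mean = mean(votes.values())
    revised = {n: (v + panel_mean) / 2 if abs(v - panel_mean) > 0.3 else v
               for n, v in votes.items()}
    verdict = mean(revised.values())
    return {"votes": revised, "verdict": verdict, "approved": verdict >= threshold}
```

In a multi-model deployment, each persona would be backed by a different LLM provider so the panel's failure modes are uncorrelated.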
Reference documentation for the adr-decision workflow skill.
Status
Version: 1.0.0
Session checkpoint documenting MoE classification system implementation and remaining work
⚠️ ADR-118 Architecture Update (January 2026): org.db (Tier 2, regenerable: messages) and platform.db (Tier 1: components). References to context.db below reflect the architecture at the time of analysis. The integration recommendations remain valid; use org.db/sessions.db instead of context.db.
Generated: December 31, 2025
Comprehensive analysis of the MoE architecture including commands, agents, consensus algorithm, and usage patterns
Generated: December 31, 2025
Objective: Achieve 95-100% classification confidence on all documents without requiring human approval through deep semantic analysis, intelligent signal injection, iterative refinement, and multi-expert judge verification.
Deep document analysis with MoE expert judges for autonomous classification without human approval
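The refinement loop implied by this objective can be sketched as below. The `score` stand-in, the signal format, and the round budget are assumptions for illustration; only the 95% target comes from the objective itself:

```python
def score(text: str, signals: list) -> tuple[str, float]:
    # Toy stand-in for the real classifier: confidence grows with each injected signal.
    return ("adr", min(1.0, 0.6 + 0.15 * len(signals)))

def refine_until_confident(text: str, target: float = 0.95, max_rounds: int = 4):
    """Iteratively inject signals until the classifier clears the target confidence."""
    signals: list[str] = []
    for round_no in range(1, max_rounds + 1):
        label, confidence = score(text, signals)
        if confidence >= target:
            return label, confidence, round_no
        # Signal injection (assumed form): add a hint derived from the best guess so far.
        signals.append(f"likely-type:{label}")
    return label, confidence, max_rounds
```

Documents that never clear the target within the round budget would be the ones escalated to the multi-expert judge verification step.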
Overview
Mixture of Experts document classification with Type Expert agents, autonomous mode, and v3 enhanced frontmatter enforcement
You are the **MoE Content Classifier Agent**, a specialized AI agent for deep document classification using the Mixture of Experts system with Type Expert coordination.
Level 1: System Context Diagram
Date: December 31, 2025
Comparison of MoE classification with and without semantic embeddings
Date: December 31, 2025
Reference documentation for the moe-enhancement skill.
Date: December 31, 2025
Post-implementation report for MoE classification system enhancements
Date: December 31, 2025
For AI Agents and Users: Configure the MoE Constitutional Court architecture based on your available LLM providers - from single-provider enterprise deployments to full multi-provider diversity.
Mixture-of-Experts workflow for task discovery, agent invocation creation, tasklist updates, and orchestrated execution
You are the **Reasoning Trace Specialist**, an MoE agent responsible for capturing, storing, analyzing, and querying the step-by-step decision paths that agents take during task execution.
Automated eval-improve cycle with critic agent and F1 scoring
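The F1 metric driving this cycle, and a minimal stopping loop around it, might look like the sketch below (the 0.9 target and the set-based label comparison are illustrative assumptions):

```python
def f1_score(predicted: set, actual: set) -> float:
    """F1 over predicted vs. gold labels for one eval round."""
    tp = len(predicted & actual)          # true positives
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(actual)
    return 2 * precision * recall / (precision + recall)

def eval_improve(rounds, target_f1: float = 0.9):
    # Each round: the critic compares predictions to gold labels;
    # stop as soon as F1 clears the bar (assumes at least one round).
    for round_no, (predicted, actual) in enumerate(rounds, 1):
        score = f1_score(set(predicted), set(actual))
        if score >= target_f1:
            return round_no, score
    return len(rounds), score
```

In the full pipeline, the critic agent would sit between rounds, turning the F1 gap into concrete prompt or threshold adjustments before the next evaluation.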
MoE orchestrator for generating Claude Code skills from documentation, repositories, and PDFs
You are the **Token Economics Analyst**, an MoE agent responsible for tracking, analyzing, and optimizing the token economics of AI agent operations. Your specialty is cost visibility and model comparison.
You are the **Tool Analytics Specialist**, an MoE agent responsible for analyzing tool call patterns, tracking success rates, and optimizing agent-tool interactions. Your specialty is understanding HOW agents use their tools.