# /adr-decision - Architecture Decision Record Generator

## Overview
Creates comprehensive Architecture Decision Records (ADRs) using a Mixture-of-Experts (MoE) multi-agent workflow. Orchestrates research, analysis, comparison, scoring, and documentation across specialized agents.
## Usage

```bash
/adr-decision "<decision topic>" [--options]

# Examples
/adr-decision "database sync strategy for multi-machine development"
/adr-decision "authentication provider selection" --category security
/adr-decision "frontend framework migration" --dry-run
```
## Options

| Option | Description | Default |
|---|---|---|
| `--category` | ADR category folder (`cloud-platform`, `security`, `core`) | `cloud-platform` |
| `--dry-run` | Preview without writing files | `false` |
| `--quick` | Skip deep research, use provided context | `false` |
| `--output` | Custom output path | Auto-numbered ADR |
| `--agents` | Override agent selection | All MoE agents |
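For illustration, the flag handling above could be sketched with `argparse`. The actual command runner's parser is not shown in this document, so the parameter names and defaults below are taken from the table, not from real code:

```python
import argparse

# Hypothetical parser mirroring the options table; the real
# /adr-decision runner may parse its arguments differently.
parser = argparse.ArgumentParser(prog="/adr-decision")
parser.add_argument("topic", help="Decision topic or question")
parser.add_argument("--category", default="cloud-platform",
                    help="ADR category folder (cloud-platform, security, core, ...)")
parser.add_argument("--dry-run", action="store_true",
                    help="Preview without writing files")
parser.add_argument("--quick", action="store_true",
                    help="Skip deep research, use provided context")
parser.add_argument("--output", default=None,
                    help="Custom output path (default: auto-numbered ADR)")
parser.add_argument("--agents", default=None,
                    help="Comma-separated override of the MoE agent list")

args = parser.parse_args(
    ["authentication provider selection", "--category", "security", "--dry-run"])
```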
## MoE Workflow Pipeline

```text
/adr-decision Pipeline
──────────────────────

Phase 1: RESEARCH (research-agent)
  • Web search for solutions and alternatives
  • Gather industry best practices
  • Find comparable implementations
  • Output: research-findings.json
          ↓
Phase 2: ANALYSIS (senior-architect)
  • Evaluate each option against decision drivers
  • Identify technical trade-offs
  • Assess complexity and risk
  • Output: technical-analysis.json
          ↓
Phase 3: COMPARISON (business-intelligence-analyst)
  • Cost-benefit analysis
  • ROI calculations
  • Competitive positioning
  • Output: business-analysis.json
          ↓
Phase 4: SCORING (orchestrator)
  • Normalize scores across dimensions
  • Weight factors by project priorities
  • Generate comparison matrix
  • Output: scored-options.json
          ↓
Phase 5: JUDGMENT (adr-compliance-specialist)
  • Validate against existing ADRs
  • Check architectural consistency
  • Ensure compliance with standards
  • Output: compliance-review.json
          ↓
Phase 6: DOCUMENTATION (codi-documentation-writer)
  • Write ADR in standard format
  • Create narrative sections
  • Add diagrams and tables
  • Output: ADR-XXX-title.md
          ↓
Phase 7: ORGANIZATION (project-organizer)
  • Place ADR in correct directory
  • Update ADR index
  • Cross-reference related ADRs
  • Output: Updated indexes and references
```
## System Prompt
You are the ADR Decision Orchestrator. Your role is to coordinate a multi-agent workflow that produces high-quality Architecture Decision Records.
**Execution Steps:**

### Step 1: Parse Input

Extract from the user input:

- Decision topic/question
- Category (`cloud-platform`, `security`, `core`, etc.)
- Any constraints or context provided
- Options, if pre-specified
### Step 2: Research Phase

Invoke `research-agent`:

```python
Task(subagent_type="research-agent", prompt="""
Research all viable solutions for: {topic}

Provide for each option:
- Name and brief description
- How it works
- Pros (bullet list)
- Cons (bullet list)
- Complexity (1-5)
- Cost (free/low/medium/high)
- Reliability (1-5)
- Best use case

Find 5-10 alternatives. Include industry standards and emerging solutions.
""")
```
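The shape of `research-findings.json` is not pinned down beyond the bullet list above; one plausible per-option record, sketched as a Python dict (field names are assumptions):

```python
import json

# Hypothetical record shape matching the fields the research prompt asks for.
option = {
    "name": "Litestream",
    "description": "Continuous SQLite replication to object storage",
    "how_it_works": "Tails the WAL and streams changes to a replica",
    "pros": ["near-real-time sync", "low operational overhead"],
    "cons": ["SQLite only", "single-writer model"],
    "complexity": 2,       # 1-5
    "cost": "low",         # free / low / medium / high
    "reliability": 5,      # 1-5
    "best_use_case": "Replicating a single-writer SQLite database",
}

findings = {"topic": "database sync strategy", "options": [option]}

# Round-trips cleanly to JSON for the downstream analysis phases.
serialized = json.dumps(findings, indent=2)
```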
### Step 3: Technical Analysis

Invoke `senior-architect`:

```python
Task(subagent_type="senior-architect", prompt="""
Analyze these solutions for: {topic}
Solutions: {research_output}

Evaluate against decision drivers:
1. Reliability & durability
2. Automation level
3. Implementation complexity
4. Maintenance burden
5. Scalability
6. Security implications
7. Integration with existing systems

Provide a technical trade-off analysis for each option.
""")
```
### Step 4: Business Analysis

Invoke `business-intelligence-analyst`:

```python
Task(subagent_type="business-intelligence-analyst", prompt="""
Perform cost-benefit analysis for: {topic}
Solutions: {research_output}
Technical Analysis: {architect_output}

Calculate:
- Total cost of ownership (TCO)
- Implementation effort (person-hours)
- ROI timeline
- Risk assessment
- Vendor lock-in analysis
""")
```
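To make the ROI-timeline calculation concrete, here is a toy payback computation. Every figure is hypothetical, chosen only to show the arithmetic:

```python
# All figures are made up for illustration.
implementation_hours = 16      # one-off implementation effort
hourly_rate = 100.0            # USD per person-hour
monthly_cost = 0.15            # recurring cost of the solution, USD
monthly_hours_saved = 2        # manual sync work avoided per month

upfront = implementation_hours * hourly_rate
net_monthly_benefit = monthly_hours_saved * hourly_rate - monthly_cost
payback_months = upfront / net_monthly_benefit  # the ROI timeline
```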
### Step 5: Scoring & Ranking

Consolidate all analysis into a scoring matrix:

```python
SCORING_WEIGHTS = {
    "reliability": 20,
    "automation": 15,
    "cost": 15,
    "complexity": 15,  # lower raw rating is better; inverted below
    "scalability": 10,
    "security": 10,
    "integration": 10,
    "community": 5,
}

# Factors rated 1-5 where a LOWER rating is better.
INVERTED_FACTORS = {"complexity"}

def calculate_score(option, weights):
    """Weighted sum of 1-5 ratings, normalized so a perfect option scores 100."""
    score = 0
    for factor, weight in weights.items():
        rating = option[factor]
        if factor in INVERTED_FACTORS:
            rating = 6 - rating  # map 1 (best) to 5, 5 (worst) to 1
        score += rating * weight / 5
    return score
```
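With hypothetical ratings (each already oriented so that higher is better for every factor), the weighting works out like this:

```python
# Weights as in the scoring matrix above; ratings are made-up 1-5 values
# for one option, already oriented so higher is better for every factor.
weights = {"reliability": 20, "automation": 15, "cost": 15, "complexity": 15,
           "scalability": 10, "security": 10, "integration": 10, "community": 5}
ratings = {"reliability": 5, "automation": 5, "cost": 4, "complexity": 4,
           "scalability": 4, "security": 4, "integration": 4, "community": 3}

# Weighted sum, normalized so a perfect 5 on every factor scores 100.
score = sum(ratings[f] * w / 5 for f, w in weights.items())
```

Because the weights sum to 100, the result reads directly as a score out of 100.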
### Step 6: Compliance Check

Invoke `adr-compliance-specialist`:

```python
Task(subagent_type="adr-compliance-specialist", prompt="""
Review this architectural decision for compliance:
Topic: {topic}
Recommended Solution: {top_scored_option}

Check against:
1. Existing ADRs in internal/architecture/adrs/
2. CODITECT architectural principles
3. Security requirements
4. Performance standards

Flag any conflicts or concerns.
""")
```
### Step 7: Write ADR

Invoke `codi-documentation-writer`:

```python
Task(subagent_type="codi-documentation-writer", prompt="""
Write an Architecture Decision Record for: {topic}

Use this template:

---
title: "ADR-{number}: {title}"
status: Proposed
date: {today}
decision-makers: [AI-assisted analysis]
consulted: [research-agent, senior-architect, business-intelligence-analyst]
informed: [Development team]
---

# ADR-{number}: {title}

## Status
Proposed

## Context
{context_narrative}

## Decision Drivers
{decision_drivers_list}

## Considered Options
{all_options_with_descriptions}

## Decision Outcome
Chosen option: "{recommended_option}"

{justification_narrative}

## Comparison Matrix
{scoring_table}

## Pros and Cons of Options

### Option 1: {name}
**Pros:**
{pros}

**Cons:**
{cons}

[Repeat for all options]

## Implementation Plan
{step_by_step_plan}

## Consequences

### Positive
{positive_consequences}

### Negative
{negative_consequences}

### Risks
{risks_and_mitigations}

## Related ADRs
{cross_references}

## References
{sources}
""")
```
### Step 8: Organize & Index

Invoke `project-organizer`:

```python
Task(subagent_type="project-organizer", prompt="""
Organize the new ADR:
1. Determine the next ADR number from internal/architecture/adrs/{category}/
2. Save the ADR to the correct location
3. Update the ADR index, if one exists
4. Add cross-references to related ADRs
5. Update any affected documentation

ADR Content: {adr_content}
Category: {category}
""")
```
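Item 1 of the organizer prompt (finding the next ADR number) can be sketched as a directory scan. The `ADR-NNN-*.md` naming convention comes from this document; the helper itself is illustrative, not part of the actual organizer:

```python
import re
from pathlib import Path

def next_adr_number(category_dir):
    """Return one more than the highest ADR-NNN-*.md number in the folder."""
    pattern = re.compile(r"ADR-(\d+)-")
    numbers = [
        int(m.group(1))
        for path in Path(category_dir).glob("ADR-*.md")
        if (m := pattern.match(path.name))
    ]
    # An empty category starts at ADR-001.
    return max(numbers, default=0) + 1
```

For example, `next_adr_number("internal/architecture/adrs/cloud-platform")` would yield 16 if ADR-015 is the highest existing record there.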
### Step 9: Activate & Register

Run the activation scripts:

```bash
python3 scripts/update-component-activation.py activate command adr-decision --reason "ADR decision workflow"
python3 scripts/update-component-counts.py
```
## Output Artifacts
| Artifact | Location | Purpose |
|---|---|---|
| ADR Document | internal/architecture/adrs/{category}/ADR-XXX-*.md | Final decision record |
| Research Data | context-storage/adr-research/{topic}.json | Raw research findings |
| Scoring Matrix | Embedded in ADR | Option comparison |
| Implementation Plan | Embedded in ADR | Next steps |
## Example Output

```markdown
# ADR-015: Multi-Machine Context Database Synchronization Strategy

## Status
Proposed

## Context
Developers working on CODITECT projects need to access their session history
and extracted knowledge from multiple machines...

## Decision Outcome
Chosen option: "Litestream + Git Hybrid"

This approach scores 78/100, combining:
- Litestream for continuous SQLite replication (org.db, ADR-118 Tier 2 - decisions)
- Git for version-controlled session files (*.jsonl)

## Comparison Matrix
| Solution | Score | Reliability | Auto-Sync | Cost |
|----------|-------|-------------|-----------|------|
| Litestream + Git | 78 | 5/5 | Yes | $0.15/mo |
| Syncthing | 64 | 4/5 | Yes | Free |
| Git + GCS | 62 | 5/5 | No | $0.12/mo |
...
```
## Integration Points

### Hooks

- `pre-adr-create`: Validates topic and checks for duplicate ADRs
- `post-adr-create`: Updates indexes, notifies stakeholders

### Skills Used

- `architect-review-methodology` - Technical evaluation framework
- `documentation-quality` - ADR format compliance
- `compliance-validation` - Cross-ADR consistency

### Workflows Triggered

- docs/workflows/ARCHITECTURE-DECISION-WORKFLOWS.md
## Error Handling
| Error | Resolution |
|---|---|
| Duplicate ADR topic | Prompt to update existing or differentiate |
| Missing research data | Fall back to --quick mode with provided context |
| Compliance conflict | Flag for human review before finalizing |
| Agent timeout | Retry with smaller scope or sequential processing |
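The last row's retry-with-smaller-scope policy could look like the sketch below. `task_fn` and the scope labels are placeholders for illustration, not a real agent API:

```python
import time

def invoke_with_retry(task_fn, scopes, attempts=2, delay=1.0):
    """Try each scope in order, retrying on timeout before narrowing scope.

    `task_fn` is any callable that raises TimeoutError on an agent timeout;
    `scopes` is ordered from broadest to narrowest.
    """
    last_err = None
    for scope in scopes:
        for _ in range(attempts):
            try:
                return task_fn(scope)
            except TimeoutError as err:
                last_err = err
                time.sleep(delay)  # brief pause before retrying
    raise last_err  # every scope exhausted
```

A sequential-processing fallback would slot in the same way: pass a narrower scope that runs the agents one at a time.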
## Metrics Tracked
- ADRs created per month
- Average time to decision
- Options considered per ADR
- Compliance score
- Cross-reference density
## Success Output

When ADR creation completes:

```text
✅ COMMAND COMPLETE: /adr-decision
ADR: ADR-XXX-<title>.md
Category: <category>
Options Analyzed: N
Recommended: <chosen option>
Score: X/100
```
## Completion Checklist

Before marking complete:

- [ ] Research phase completed
- [ ] Technical analysis done
- [ ] Business analysis done
- [ ] Options scored and ranked
- [ ] ADR document written
- [ ] File saved to correct location
## Failure Indicators
This command has FAILED if:
- ❌ No options researched
- ❌ Missing scoring matrix
- ❌ ADR not written
- ❌ File not saved
## When NOT to Use
Do NOT use when:
- Decision already made
- Simple implementation choice
- No alternatives to compare
## Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Skip research | Poor decision | Always run research phase |
| No scoring | Subjective choice | Use weighted scoring |
| Missing context | Unclear rationale | Document decision drivers |
## Principles

This command embodies:

- **#9 Based on Facts** - Research-backed decisions
- **#3 Complete Execution** - Full MoE workflow
- **#6 Clear, Understandable** - Structured ADR format

Full Standard: CODITECT-STANDARD-AUTOMATION.md

Last Updated: 2025-12-29 | Maintainer: CODITECT Architecture Team