Component Improvement System

Overview

A systematic approach to continuously improve CODITECT framework components (agents, skills, commands, hooks) based on usage patterns, quality metrics, and retrospective feedback.

When to Use This Skill

Use when:

  • Retrospective shows low skill scores (<70%)
  • Components have low utilization in context database
  • Preparing for a major release
  • Periodic maintenance (weekly/monthly)
  • After adding SKILL-QUALITY-STANDARD.md requirements

Do NOT use when:

  • Active development sprint in progress
  • Components are being actively created/modified
  • Immediate deadline pressure (defer to next cycle)

Prerequisites

Before running improvement cycle:

  • Retrospective data available (context-storage/skill-learnings.json)
  • Component database indexed (context-storage/context.db)
  • Python environment activated (source .venv/bin/activate)

MoE Integration

This skill uses Mixture-of-Experts (MoE) for accurate assessment:

MoE Agents Used

| Agent | Role | Purpose |
|-------|------|---------|
| moe-content-classifier | Classifier | Categorize component quality |
| component-analyzer | Analyzer | Deep quality assessment |
| component-enhancer | Enhancer | Apply improvements |

MoE Judges Used

| Judge | Role | Weight |
|-------|------|--------|
| quality-judge | Assess structure | 30% |
| completeness-judge | Check sections | 30% |
| standards-judge | Verify compliance | 25% |
| usability-judge | Evaluate clarity | 15% |
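The judge weights above sum to 100% and combine into a single composite score. A minimal sketch of that weighting (the weights come from the table; the scoring function and the 0.0-1.0 score scale are assumptions, not the actual MoE implementation):

```python
# Illustrative sketch only: weights are taken from the table above; the real
# MoE aggregation may differ. Judge scores are assumed to be 0.0-1.0.
JUDGE_WEIGHTS = {
    "quality-judge": 0.30,       # structure
    "completeness-judge": 0.30,  # sections
    "standards-judge": 0.25,     # compliance
    "usability-judge": 0.15,     # clarity
}

def composite_score(judge_scores: dict) -> float:
    """Weighted average of per-judge scores; missing judges count as 0.0."""
    return sum(w * judge_scores.get(j, 0.0) for j, w in JUDGE_WEIGHTS.items())

scores = {
    "quality-judge": 0.80,
    "completeness-judge": 0.60,
    "standards-judge": 0.90,
    "usability-judge": 0.70,
}
print(round(composite_score(scores), 2))  # prints 0.75
```

A component whose composite falls below the 0.7 discovery threshold would be flagged for the improvement cycle.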

Invoking MoE Assessment

# Full MoE assessment of a component
/moe-analyze components/how.md --judges quality,completeness,standards

# Batch MoE analysis
python3 scripts/moe_classifier/classify.py commands/ --recursive --judges all

The 5-Phase Improvement Cycle

Phase 1: Discovery - Identify Components Needing Improvement

# Get skill scores from retrospective
python3 hooks/session-retrospective.py --analyze-skills

# Get component utilization stats
python3 scripts/component-indexer.py --stats

# Find low-scoring components
python3 scripts/skill-pattern-analyzer.py --recommendations

# MoE-enhanced discovery
/moe-agents --analyze-quality --threshold 0.7

Output: List of components prioritized by improvement need (MoE-weighted)
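Conceptually, discovery reduces to filtering components below the 70% success threshold and ranking the worst first. A sketch of that filter (the JSON layout of skill-learnings.json and the field names here are assumptions for illustration):

```python
import json

# Assumed layout (illustrative only):
# {"skills": {"how.md": {"success_rate": 0.49}, "agent.md": {"success_rate": 0.85}}}
def discover(learnings_path: str, threshold: float = 0.70) -> list:
    """Return (component, success_rate) pairs below threshold, worst first."""
    with open(learnings_path) as f:
        data = json.load(f)
    low = [
        (name, stats["success_rate"])
        for name, stats in data.get("skills", {}).items()
        if stats["success_rate"] < threshold
    ]
    return sorted(low, key=lambda item: item[1])
```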

Phase 2: Analysis - Deep Dive into Problem Components

# For each low-scoring component:
python3 scripts/skill-selector.py --type skill "<component-name>"

# Check against quality standard
cat coditect-core-standards/SKILL-QUALITY-STANDARD.md

# Identify missing sections:
# - Completion checklist?
# - Success output markers?
# - When NOT to use section?
# - Anti-patterns documented?

Output: Gap analysis for each component
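The missing-sections check above can be sketched as a scan for required headings (section names are taken from the checklist; the function itself is illustrative, not part of the framework):

```python
import re

# Sections required by SKILL-QUALITY-STANDARD.md (per the checklist above)
REQUIRED_SECTIONS = [
    "Success Output",
    "Completion Checklist",
    "When NOT to Use",
    "Anti-Patterns",
]

def gap_analysis(markdown_text: str) -> list:
    """Return the required sections missing from a component's markdown."""
    headings = [
        m.group(1).strip()
        for m in re.finditer(r"^#{1,6}\s+(.+)$", markdown_text, re.MULTILINE)
    ]
    return [s for s in REQUIRED_SECTIONS if not any(s in h for h in headings)]

doc = "## Success Output\n...\n## Completion Checklist\n"
print(gap_analysis(doc))  # prints ['When NOT to Use', 'Anti-Patterns']
```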

Phase 3: Enhancement - Apply Quality Standard

For each component needing improvement:

1. **Add Success Markers**

   ## Success Output

   When successful, output:
   ```
   ✅ SKILL COMPLETE: <skill-name>
   - [x] Step 1 verified
   - [x] Step 2 completed
   - [x] Output validated
   ```

2. **Add Completion Checklist**

   ## Completion Checklist

   Before marking complete:
   - [ ] All steps executed
   - [ ] Output verified
   - [ ] Tests passed (if applicable)

3. **Add When NOT to Use**

   **Do NOT use when:**
   - <anti-condition 1>
   - <anti-condition 2> (use [alternative] instead)

4. **Add Anti-Patterns**

   ## Anti-Patterns (Avoid)

   | Anti-Pattern | Problem | Solution |
   |--------------|---------|----------|
   | Pattern 1 | Issue | Fix |

Phase 4: Validation - Verify Improvements

# Re-classify improved components
python3 scripts/moe_classifier/classify.py <component-path> --update-frontmatter

# Re-index in database
python3 scripts/component-indexer.py

# Update component counts
python3 scripts/update-component-counts.py

# Verify registration
python3 scripts/core/ensure_component_registered.py --verify <component-name>

Phase 5: Reporting - Document Results

# Generate improvement report
python3 scripts/component-improvement-report.py

# Update skill learnings
python3 hooks/session-retrospective.py --update-learnings

Output: Improvement report with before/after metrics
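The before/after metrics in the report reduce to a per-component delta plus an average. A sketch (the snapshot format is an assumption; values are illustrative):

```python
def score_deltas(before: dict, after: dict) -> dict:
    """Success-rate change per component present in both snapshots."""
    return {
        name: round(after[name] - before[name], 2)
        for name in before
        if name in after
    }

# Illustrative values matching the sample output in this skill
before = {"how.md": 0.49, "classify.md": 0.48}
after = {"how.md": 0.72, "classify.md": 0.71}
deltas = score_deltas(before, after)
average_change = sum(deltas.values()) / len(deltas)  # 0.23
```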

Quick Reference Commands

# Full improvement cycle
/improve-components

# Analyze specific component
/improve-components --analyze <component-name>

# Enhance specific component
/improve-components --enhance <component-name>

# Generate report only
/improve-components --report

Improvement Metrics

| Metric | Target | Measurement |
|--------|--------|-------------|
| Success Rate | >70% | From retrospective |
| Completion | 100% sections | Quality standard checklist |
| Discoverability | >0.85 confidence | From MoE classifier |
| Utilization | >10 invocations/week | From skill history |
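Checking measured values against these targets is a straight comparison. A sketch (metric keys and the measured-value source are assumptions; the Completion metric is omitted because it is a per-section checklist rather than a number):

```python
# Numeric targets from the table above (metric keys are illustrative)
TARGETS = {
    "success_rate": 0.70,      # from retrospective
    "confidence": 0.85,        # from MoE classifier
    "weekly_invocations": 10,  # from skill history
}

def unmet_targets(measured: dict) -> list:
    """Metrics whose measured value does not exceed the target."""
    return [m for m, t in TARGETS.items() if measured.get(m, 0) <= t]

print(unmet_targets({"success_rate": 0.75, "confidence": 0.80,
                     "weekly_invocations": 12}))
# prints ['confidence']
```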

Component Priority Matrix

| Priority | Criteria | Action |
|----------|----------|--------|
| P0 Critical | <20% success rate | Immediate revision |
| P1 High | <50% success rate | Next improvement cycle |
| P2 Medium | <70% success rate | Scheduled improvement |
| P3 Low | >70% success rate | Monitor only |
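The matrix is a threshold ladder over success rate; a direct sketch of the mapping (exactly 70% is treated as P3 here, an interpretation the matrix leaves open):

```python
def priority(success_rate: float) -> str:
    """Map a component success rate (0.0-1.0) to a tier per the matrix above."""
    if success_rate < 0.20:
        return "P0"  # Critical: immediate revision
    if success_rate < 0.50:
        return "P1"  # High: next improvement cycle
    if success_rate < 0.70:
        return "P2"  # Medium: scheduled improvement
    return "P3"      # Low: monitor only
```

For example, `priority(0.49)` returns `"P1"` and `priority(0.57)` returns `"P2"`.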

Automation Schedule

| Frequency | Action | Trigger |
|-----------|--------|---------|
| Per session | Retrospective analysis | Session end hook |
| Daily | Low-score alerts | Scheduled job |
| Weekly | Improvement cycle | Manual or cron |
| Monthly | Full audit | Manual review |


Success Output

When improvement cycle completes successfully:

✅ SKILL COMPLETE: component-improvement

Cycle Summary:
- Components analyzed: 25
- Components improved: 8
- Average score change: +15%
- MoE confidence increase: +0.12

Improved Components:
- [x] how.md: 49% → 72% (success rate)
- [x] classify.md: 48% → 71% (success rate)
- [x] work-next.md: 49% → 68% (success rate)
- [x] agent.md: 57% → 75% (success rate)

Quality Improvements:
- [x] Success output markers added to 8 components
- [x] Completion checklists added to 8 components
- [x] "When NOT to use" sections added to 8 components
- [x] Anti-patterns documented in 8 components

Remaining P0 Components: 0
Remaining P1 Components: 3

Next cycle recommended: 7 days

Completion Checklist

Before marking complete, verify:

  • Discovery phase completed (low-scoring components identified)
  • Analysis phase completed (gap analysis for each component)
  • Enhancement phase completed (quality sections added)
  • Validation phase completed (re-classification, re-indexing)
  • Reporting phase completed (improvement report generated)
  • All P0 components improved (>70% success rate)
  • Component database updated with new metadata
  • Skill learnings updated with improvement patterns

Failure Indicators

This skill has FAILED if:

  • ❌ No retrospective data available (skill-learnings.json missing)
  • ❌ Component database not indexed (context.db missing or outdated)
  • ❌ Unable to read/write component files (permission errors)
  • ❌ No components identified for improvement (nothing below 70%)
  • ❌ Re-classification fails after improvements
  • ❌ Improvement report not generated

When NOT to Use

Do NOT use this skill when:

  • Active development sprint in progress (causes merge conflicts)
  • Components are being actively created/modified (wait for stabilization)
  • Immediate deadline pressure (defer to next improvement cycle)
  • No retrospective data collected yet (need at least 1 week of data)
  • During major refactoring (coordinate timing)
  • Pre-release freeze period (stability priority)

Anti-Patterns (Avoid)

| Anti-Pattern | Problem | Solution |
|--------------|---------|----------|
| Improving during active sprint | Merge conflicts | Wait for quiet period between sprints |
| Bulk changes without testing | Broken components | Improve incrementally, test each |
| Ignoring retrospective data | Subjective improvements | Always use metrics to prioritize |
| Skipping validation phase | Broken components deployed | Always re-classify and re-index |
| Improving all components at once | Review fatigue | Focus on P0/P1, batch by priority |
| Not tracking before/after metrics | No ROI visibility | Document score changes |
| Skipping MoE assessment | Low confidence improvements | Use MoE judges for quality validation |

Principles

This skill embodies:

  • #1 Recycle, Extend, Re-Use - Improve existing before creating new components
  • #9 Based on Facts - Use retrospective metrics and MoE confidence, not opinions
  • #10 Research When in Doubt - Check quality standard for guidance
  • #5 Eliminate Ambiguity - Clear success/failure criteria in improvements

Full Standard: CODITECT-STANDARD-AUTOMATION.md

Related Components

| Component | Purpose |
|-----------|---------|
| /optimize-skills | View skill health dashboard |
| /retrospective | Run session retrospective |
| skill-pattern-analyzer.py | Analyze skill patterns |
| SKILL-QUALITY-STANDARD.md | Quality requirements |
| component-indexer.py | Database indexing |

Version: 1.0.0 | Created: 2026-01-03 | Author: CODITECT Team