

title: Component Activation Infrastructure - PROJECT PLAN
type: reference
component_type: reference
version: 1.0.0
created: '2025-12-27'
updated: '2025-12-27'
status: active
tags:

  • ai-ml
  • deployment
  • security
  • testing
  • architecture
  • automation
  • configuration
  • data-processing

summary: 'Component Activation Infrastructure - PROJECT PLAN. Project ID: CODITECT-ACTIVATION-2025-11-29. Status: Phase 1-4 complete; Phase 5 in progress.'
moe_confidence: 0.950
moe_classified: 2025-12-31
audience: contributor

Component Activation Infrastructure - PROJECT PLAN

Project ID: CODITECT-ACTIVATION-2025-11-29
Status: βœ… Phase 1-4 Complete | πŸ”„ Phase 5 In Progress
Priority: P0 - Multi-Session Integration
Created: 2025-11-29
Last Updated: 2025-11-29
Target Completion: Phase 5 - 2025-11-29 (5-7 hours)


πŸ“‹ EXECUTIVE SUMMARY​

The Problem​

The CODITECT framework has all components present as files, but 0% are callable:

  • 60 agents exist but can't be invoked
  • 30 skills exist but can't be used
  • 89 commands exist but can't be executed
  • 92 scripts exist but aren't indexed
  • 17 hooks exist but aren't active

Root Cause: Missing activation infrastructure connecting component files β†’ runtime execution.

The Solution​

Build 3-layer activation system in 8-12 hours:

  1. Registry Population - Scan files, populate JSON registries
  2. Component Activation - Make components callable via Claude Code tools
  3. Framework Integration - Connect documentation β†’ execution

Success Criteria​

  • βœ… All 60 agents invocable via Task(subagent_type="agent-name")
  • βœ… All 30 skills accessible via Skill(skill="skill-name")
  • βœ… All 89 commands executable via /command-name
  • βœ… All 92 scripts discoverable via registries
  • βœ… coditect-router knows all 290 components

🎯 PROJECT OBJECTIVES​

Primary Objective​

Make all 290 CODITECT components discoverable, registered, and callable within 8-12 hours.

Secondary Objectives​

  1. Update all documentation to reflect actual component counts
  2. Synchronize registries with filesystem reality
  3. Enable end-to-end testing of component invocation
  4. Establish automated registry maintenance

Non-Objectives (Out of Scope)​

  • ❌ Creating new components (focus on activating existing)
  • ❌ Refactoring component architecture
  • ❌ Performance optimization (functional first, fast later)
  • ❌ User interface changes

πŸ“Š CURRENT STATE ASSESSMENT​

Components Inventory (Actual vs Documented)​

| Component Type | Files Exist | Documented | Registered | Callable | Gap % |
|---|---|---|---|---|---|
| Agents | 60 | 53 | 0 | 0 | 100% |
| Skills | 30 | 27 | 20 | 0 | 100% |
| Commands | 89 | 79 | 0 | 0 | 100% |
| Scripts | 92 | 16 | 0 | 0 | 100% |
| Hooks | 17 | 0 | 0 | 0 | 100% |
| TOTAL | 290 | 175 | 20 | 0 | 100% |

Infrastructure Status​

What Exists (Working):

  • βœ… Orchestrator (lib/orchestration/orchestrator.py - 625 lines)
  • βœ… Executor (lib/orchestration/executor.py - 753 lines)
  • βœ… Dispatcher (scripts/core/agent_dispatcher.py - 708 lines)
  • βœ… Agent Registry structure (lib/orchestration/agent_registry.py)

What's Missing (Critical):

  • ❌ Registry population system (scan files β†’ populate registries)
  • ❌ Component activation system (make callable via Claude Code)
  • ❌ Framework bridge (connect docs β†’ runtime)
  • ❌ Automated registry sync (keep registries updated)

Impact Assessment​

User Impact:

  • Users see only 60% of documented components (175/290)
  • Users can call 0% of components (0/290)
  • Framework appears far less capable than it is
  • Major subsystems completely hidden (Generative UI, Hooks, Git)

Developer Impact:

  • Can't test components programmatically
  • Can't use orchestration infrastructure
  • Can't validate component integration
  • Manual component invocation only

πŸ—οΈ ARCHITECTURE DESIGN​

System Architecture​

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ LAYER 1: COMPONENT FILES (Existing) β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ agents/*.md (60 files) β”‚
β”‚ skills/*/SKILL.md (30 files) β”‚
β”‚ commands/*.md (89 files) β”‚
β”‚ scripts/**/*.py,*.sh (92 files) β”‚
β”‚ hooks/**/* (17 files) β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
↓
πŸ”§ REGISTRY LOADER (NEW)
↓
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ LAYER 2: REGISTRIES (JSON) - To Be Populated β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ config/framework-registry.json β”‚
β”‚ skills/REGISTRY.json β”‚
β”‚ config/commands-registry.json (NEW) β”‚
β”‚ config/scripts-registry.json (NEW) β”‚
β”‚ config/hooks-registry.json (NEW) β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
↓
πŸ”§ COMPONENT ACTIVATOR (NEW)
↓
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ LAYER 3: RUNTIME (Callable) β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ AgentRegistry.agents = {60 agents} β”‚
β”‚ SkillRegistry.skills = {30 skills} β”‚
β”‚ CommandRegistry.commands = {89 commands} β”‚
β”‚ ScriptRegistry.scripts = {92 scripts} β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
↓
πŸ”§ FRAMEWORK BRIDGE (NEW)
↓
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ LAYER 4: CLAUDE CODE TOOLS (Invocable) β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ Task(subagent_type="git-workflow-orchestrator") β”‚
β”‚ Skill(skill="git-workflow-automation") β”‚
β”‚ SlashCommand(command="/git-sync") β”‚
β”‚ coditect-router "sync repositories" β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Component Design​

1. Registry Loader (scripts/core/registry_loader.py)​

Purpose: Scan filesystem, extract metadata, populate JSON registries

Key Methods:

```python
class RegistryLoader:
    def scan_agents(self) -> List[AgentMetadata]:
        """Scan agents/*.md, parse metadata, return agent configs"""

    def scan_skills(self) -> List[SkillMetadata]:
        """Scan skills/*/SKILL.md, parse metadata"""

    def scan_commands(self) -> List[CommandMetadata]:
        """Scan commands/*.md, parse metadata"""

    def scan_scripts(self) -> List[ScriptMetadata]:
        """Scan scripts/**/*.py,*.sh, extract metadata"""

    def scan_hooks(self) -> List[HookMetadata]:
        """Scan hooks/**/* files, extract hook configs"""

    def update_all_registries(self):
        """Update all JSON registry files with scanned data"""

    def validate_registries(self) -> ValidationReport:
        """Verify registry completeness and consistency"""
```

Metadata Extraction:

  • Parse frontmatter/YAML from markdown files
  • Extract description, capabilities, use cases, tools
  • Generate unique IDs from filenames
  • Preserve existing registry data (don't overwrite custom fields)
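As a sketch of that extraction step (flat key: value parsing only; a production loader would delegate to PyYAML for nested structures, and `parse_frontmatter` is an illustrative helper, not existing framework code):

```python
import re

def parse_frontmatter(markdown_text: str) -> dict:
    """Extract simple key: value pairs from a YAML frontmatter block.

    Hypothetical helper: handles only flat scalar fields; missing or
    malformed frontmatter yields an empty dict rather than an exception.
    """
    match = re.match(r"^---\n(.*?)\n---", markdown_text, re.DOTALL)
    if not match:
        return {}  # no frontmatter block found
    metadata = {}
    for line in match.group(1).splitlines():
        if ":" in line and not line.startswith((" ", "-")):
            key, _, value = line.partition(":")
            metadata[key.strip()] = value.strip().strip("'\"")
    return metadata

agent_md = """---
name: git-workflow-orchestrator
description: 'Coordinates multi-repo git operations'
---
# Agent body...
"""
print(parse_frontmatter(agent_md)["name"])  # git-workflow-orchestrator
```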

2. Component Activator (lib/activation/component_activator.py)​

Purpose: Load registries, instantiate runtime objects, make callable

Key Methods:

```python
class ComponentActivator:
    def __init__(self, registries: Dict[str, Path]):
        """Load all registry JSON files"""

    def activate_agent(self, agent_id: str) -> AgentConfig:
        """Load agent config, register with AgentRegistry"""

    def activate_skill(self, skill_id: str) -> SkillConfig:
        """Load skill config, expose via Skill tool"""

    def activate_command(self, command_id: str) -> CommandConfig:
        """Load command config, register with SlashCommand"""

    def activate_script(self, script_id: str) -> ScriptConfig:
        """Load script metadata, make programmatically callable"""

    def activate_all(self):
        """Bulk activation of all components"""

    def verify_callable(self, component_type: str, component_id: str) -> bool:
        """Test if component is actually callable"""
```

Activation Process:

  1. Load JSON registry
  2. Parse component metadata
  3. Create runtime configuration object
  4. Register with appropriate system (AgentRegistry, etc.)
  5. Validate callability
  6. Log activation status
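The six steps above can be sketched as a loop (the registry shape and the `AgentConfig` fields are assumptions, not the framework's actual classes):

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    """Illustrative runtime config; the real AgentRegistry entry may differ."""
    agent_id: str
    description: str
    tools: list = field(default_factory=list)

def activate_all(registry: dict) -> tuple[dict, list]:
    """Steps 1-6: parse metadata, build configs, register, log failures."""
    activated, failures = {}, []
    for agent_id, meta in registry.get("agents", {}).items():
        try:
            config = AgentConfig(
                agent_id=agent_id,
                description=meta["description"],  # required field
                tools=meta.get("tools", []),
            )
            activated[agent_id] = config  # stand-in for AgentRegistry.register()
        except KeyError as exc:
            failures.append((agent_id, f"missing field: {exc}"))  # logged, not fatal
    return activated, failures

registry = {"agents": {
    "git-workflow-orchestrator": {"description": "Git ops", "tools": ["Bash"]},
    "broken-agent": {},  # missing description -> recorded as a failure
}}
activated, failures = activate_all(registry)
print(len(activated), len(failures))  # 1 1
```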

3. Framework Bridge (lib/integration/framework_bridge.py)​

Purpose: Maintain sync between docs, registries, and runtime

Key Methods:

```python
class FrameworkBridge:
    def sync_documentation(self):
        """Update AGENT-INDEX.md, COMPLETE-INVENTORY.md with actual counts"""

    def sync_registries(self):
        """Ensure JSON registries match filesystem"""

    def validate_consistency(self) -> ConsistencyReport:
        """Check docs ↔ registries ↔ filesystem consistency"""

    def test_end_to_end(self, component_type: str, component_id: str):
        """Full invocation test (docs β†’ registry β†’ runtime β†’ callable)"""

    def generate_sync_report(self) -> SyncReport:
        """Report on sync status, gaps, inconsistencies"""
```

πŸ“… IMPLEMENTATION PHASES​

Phase 1: Documentation Sync (2-3 hours)​

Goal: Update all documentation to reflect actual component counts

Tasks:

  1. ☐ Update docs/multi-agent-reference/AGENT-INDEX.md

    • Add 7 missing agents (generative-ui-*, git-workflow-orchestrator, production-cleanup-orchestrator)
    • Update total count: 53 β†’ 60
    • Add subsystem sections (Generative UI, Git Workflow)
  2. ☐ Update docs/multi-agent-reference/COMPLETE-INVENTORY.md

    • Add 7 missing agents with full descriptions
    • Add 12 missing commands
    • Add 76 missing scripts
    • Add 17 hooks (new section)
    • Update all totals
  3. ☐ Create scripts/SCRIPTS-INDEX.md (NEW)

    • Document all 92 scripts by category
    • Core (23), Generated (14), LLM Execution (9), etc.
    • Usage examples for each category
  4. ☐ Create hooks/HOOKS-INDEX.md (NEW)

    • Document all 17 hooks (6 shell + 11 Python)
    • Hook purposes, activation, configuration

Deliverables:

  • Updated AGENT-INDEX.md (+7 agents)
  • Updated COMPLETE-INVENTORY.md (+115 components)
  • New SCRIPTS-INDEX.md (92 scripts)
  • New HOOKS-INDEX.md (17 hooks)

Acceptance Criteria:

  • All 290 components documented
  • Accurate component counts in all indexes
  • Cross-references between documents consistent

Phase 2: Registry Population (3-4 hours)​

Goal: Build registry loader, scan filesystem, populate JSON registries

Tasks:

  1. ☐ Create scripts/core/registry_loader.py (NEW - 400-500 lines)

    • Implement AgentScanner (scan agents/*.md)
    • Implement SkillScanner (scan skills/*/SKILL.md)
    • Implement CommandScanner (scan commands/*.md)
    • Implement ScriptScanner (scan scripts/**/*.py, *.sh)
    • Implement HookScanner (scan hooks/**/*.sh, *.py)
    • Implement metadata parsers (YAML frontmatter, markdown headers)
    • Implement registry updaters (preserve existing data)
  2. ☐ Update config/framework-registry.json

    • Add 7 missing agents
    • Update total agent count to 60
    • Add missing agent metadata (capabilities, tools, use_cases)
  3. ☐ Update skills/REGISTRY.json

    • Add 10 missing skills (update from 20 β†’ 30)
    • Ensure all skills have proper metadata
  4. ☐ Create config/commands-registry.json (NEW)

    • Register all 89 commands
    • Include command metadata (description, action_policy, usage)
  5. ☐ Create config/scripts-registry.json (NEW)

    • Register all 92 scripts
    • Categorize by directory (core, generated, llm_execution, etc.)
  6. ☐ Create config/hooks-registry.json (NEW)

    • Register all 17 hooks
    • Include hook types, triggers, configurations
  7. ☐ Run registry_loader.py to populate all registries

    • Execute full scan
    • Generate updated JSON files
    • Validate output
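The "preserve existing data" requirement implies merging scanned metadata into the current registry rather than overwriting it wholesale; a sketch of that merge (field names illustrative):

```python
def merge_registry(existing: dict, scanned: dict) -> dict:
    """Merge freshly scanned metadata into an existing registry entry map,
    keeping custom fields that the scanner does not produce. The scan is
    the source of truth for which components exist."""
    merged = {}
    for component_id, new_meta in scanned.items():
        old_meta = existing.get(component_id, {})
        merged[component_id] = {**old_meta, **new_meta}  # scan wins on shared keys
    return merged

existing = {"git-sync": {"description": "old", "action_policy": "confirm"}}
scanned = {"git-sync": {"description": "Sync all repositories"}}
result = merge_registry(existing, scanned)
print(result["git-sync"])
```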

Deliverables:

  • registry_loader.py (complete implementation)
  • Updated framework-registry.json (60 agents)
  • Updated skills/REGISTRY.json (30 skills)
  • New commands-registry.json (89 commands)
  • New scripts-registry.json (92 scripts)
  • New hooks-registry.json (17 hooks)

Acceptance Criteria:

  • All 290 components in JSON registries
  • Metadata complete and accurate
  • Registries validate successfully
  • No manual editing required (fully automated)

Phase 3: Component Activation (2-3 hours)​

Goal: Make all components callable via Claude Code tools

Tasks:

  1. ☐ Create lib/activation/component_activator.py (NEW - 300-400 lines)

    • Implement AgentActivator
    • Implement SkillActivator
    • Implement CommandActivator
    • Implement ScriptActivator
    • Implement bulk activation methods
    • Add validation and error handling
  2. ☐ Integrate with lib/orchestration/orchestrator.py

    • Load AgentRegistry from framework-registry.json on initialization
    • Populate orchestrator.registry with all 60 agents
    • Enable agent lookup by ID
  3. ☐ Integrate with scripts/core/agent_dispatcher.py

    • Load agent capabilities from framework-registry.json
    • Update agent_capabilities dict with all 60 agents
    • Enable agent recommendation for all agents
  4. ☐ Create activation startup script

    • scripts/activate-all-components.py (auto-run on framework load)
    • Loads all registries
    • Activates all components
    • Reports activation status
  5. ☐ Update coditect-router to load all registries

    • Load commands-registry.json
    • Load scripts-registry.json
    • Enable discovery of all 290 components
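Registry loading at orchestrator initialization could take this shape (the `AgentLookup` class is a stand-in for the real orchestrator integration, not its actual API):

```python
import json
import tempfile
from pathlib import Path

class AgentLookup:
    """Stand-in for populating orchestrator.registry from framework-registry.json."""
    def __init__(self, registry_path: Path):
        data = json.loads(registry_path.read_text())
        self.agents = data.get("agents", {})  # id -> metadata

    def get(self, agent_id: str) -> dict:
        if agent_id not in self.agents:
            raise KeyError(f"Unknown agent: {agent_id}")
        return self.agents[agent_id]

# usage sketch against a throwaway registry file
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    json.dump({"agents": {"git-workflow-orchestrator": {"tools": ["Bash"]}}}, fh)
lookup = AgentLookup(Path(fh.name))
print(lookup.get("git-workflow-orchestrator")["tools"])  # ['Bash']
```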

Deliverables:

  • component_activator.py (complete implementation)
  • Updated orchestrator.py (loads all agents)
  • Updated agent_dispatcher.py (knows all agents)
  • New activate-all-components.py script
  • Updated coditect-router

Acceptance Criteria:

  • All 60 agents invocable via Task tool
  • All 30 skills accessible (framework aware)
  • All 89 commands discoverable
  • All 92 scripts indexed
  • coditect-router returns results for all components

Phase 4: Framework Integration & Testing (1-2 hours)​

Goal: Verify end-to-end component invocation, establish sync process

Tasks:

  1. ☐ Create lib/integration/framework_bridge.py (NEW - 250-300 lines)

    • Implement documentation sync methods
    • Implement registry consistency validation
    • Implement end-to-end testing framework
  2. ☐ End-to-end invocation tests

    • Test agent invocation: Task(subagent_type="git-workflow-orchestrator", ...)
    • Test skill discovery: Query git-workflow-automation skill
    • Test command execution: /git-sync --dry-run
    • Test script discovery: coditect-router "sync repositories"
  3. ☐ Consistency validation

    • Verify docs ↔ registries consistency
    • Verify registries ↔ filesystem consistency
    • Verify runtime ↔ registries consistency
    • Generate validation report
  4. ☐ Automated sync process

    • Create pre-commit hook: Validate registries on commit
    • Create CI/CD check: Verify component consistency
    • Document registry maintenance workflow
  5. ☐ Create usage examples

    • Example: Invoking git-workflow-orchestrator agent
    • Example: Using git-workflow-automation skill
    • Example: Executing /git-sync command
    • Add to relevant documentation

Deliverables:

  • framework_bridge.py (complete implementation)
  • End-to-end test results (all passing)
  • Consistency validation report
  • Automated sync scripts (hooks, CI/CD)
  • Usage examples in docs

Acceptance Criteria:

  • All end-to-end tests pass
  • Consistency validation: 0 errors
  • Automated sync operational
  • Usage documentation complete

πŸ“Š SUCCESS METRICS​

Quantitative Metrics​

| Metric | Before | Target | Measurement |
|---|---|---|---|
| Documented Components | 175 (60%) | 290 (100%) | Count in COMPLETE-INVENTORY.md |
| Registered Components | 20 (7%) | 290 (100%) | Count across all JSON registries |
| Callable Components | 0 (0%) | 290 (100%) | Successful invocation tests |
| Agent Invocability | 0/60 | 60/60 | Task tool invocation success |
| Skill Accessibility | 0/30 | 30/30 | Framework awareness |
| Command Executability | 0/89 | 89/89 | SlashCommand execution |
| Router Coverage | 53 (18%) | 290 (100%) | coditect-router results |

Qualitative Metrics​

  • βœ… Users can discover all framework capabilities
  • βœ… Users can invoke all components without errors
  • βœ… Documentation accurately reflects framework state
  • βœ… Registries stay synchronized automatically
  • βœ… Framework feels complete and professional

Test Cases​

Test Case 1: Agent Invocation

```python
# Should work after Phase 3
Task(
    subagent_type="git-workflow-orchestrator",
    description="Sync all repositories",
    prompt="Synchronize all submodules with remote"
)
# Expected: Agent executes successfully
```

Test Case 2: Skill Discovery

```python
# Should work after Phase 3
from lib.activation import ComponentActivator

activator = ComponentActivator()
skill = activator.get_skill("git-workflow-automation")
# Expected: Returns SkillConfig with metadata
```

Test Case 3: Command Execution

```shell
# Should work after Phase 3
/git-sync --target all --mode full
# Expected: Command executes, syncs repositories
```

Test Case 4: Router Discovery

```shell
# Should work after Phase 3
coditect-router "create react components"
# Expected: Suggests generative-ui-* agents and /ui command
```

⚠️ RISKS & MITIGATION​

High-Risk Items​

Risk 1: Registry Schema Mismatch

  • Probability: Medium
  • Impact: High (components won't load)
  • Mitigation:
    • Define strict JSON schema for each registry
    • Validate all registries against schema
    • Use TypeScript-style validation in Python (pydantic)
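The plan calls for pydantic models here; below is a dependency-free sketch of the same strict-schema idea (the `REQUIRED_FIELDS` schema is an assumption, not the registry's actual contract):

```python
# Assumed minimal schema for one registry entry; the real schema would be
# a pydantic model with richer constraints.
REQUIRED_FIELDS = {"id": str, "description": str, "tools": list}

def validate_entry(entry: dict) -> list[str]:
    """Return schema violations for one registry entry (empty list = valid)."""
    errors = []
    for field_name, field_type in REQUIRED_FIELDS.items():
        if field_name not in entry:
            errors.append(f"missing field: {field_name}")
        elif not isinstance(entry[field_name], field_type):
            errors.append(f"{field_name}: expected {field_type.__name__}")
    return errors

print(validate_entry({"id": "git-sync", "description": "Sync repos", "tools": []}))  # []
```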

Risk 2: Component Metadata Parsing Errors

  • Probability: High
  • Impact: Medium (some components won't register)
  • Mitigation:
    • Implement robust markdown/YAML parsing
    • Handle missing/malformed metadata gracefully
    • Provide clear error messages for debugging
    • Create comprehensive test suite

Risk 3: Claude Code Integration Failures

  • Probability: Medium
  • Impact: Critical (components not callable)
  • Mitigation:
    • Test against actual Claude Code instance
    • Follow official integration patterns
    • Document workarounds for limitations
    • Engage with Claude Code team if needed

Risk 4: Performance Degradation

  • Probability: Low
  • Impact: Medium (slow startup times)
  • Mitigation:
    • Lazy-load registries (only when needed)
    • Cache parsed metadata
    • Optimize registry file sizes
    • Profile and optimize hot paths
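Lazy loading plus caching can be as simple as memoizing the registry parse (a sketch; the real activator may cache at a different granularity):

```python
import json
import tempfile
from functools import lru_cache
from pathlib import Path

@lru_cache(maxsize=None)
def load_registry(path_str: str) -> dict:
    """Parse a registry file at most once per process; repeat calls are cache
    hits. Callers must treat the returned dict as read-only, since the same
    object is shared across call sites."""
    return json.loads(Path(path_str).read_text())

# usage sketch: the second call never touches the filesystem
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    json.dump({"agents": {}}, fh)
load_registry(fh.name)
load_registry(fh.name)
print(load_registry.cache_info().hits)  # 1
```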

Medium-Risk Items​

Risk 5: Documentation Drift

  • Probability: High (over time)
  • Impact: Medium (confusion, errors)
  • Mitigation:
    • Automated sync process (pre-commit hooks)
    • CI/CD validation checks
    • Regular audits (monthly)
    • Clear maintenance workflow

Risk 6: Breaking Changes in Component APIs

  • Probability: Medium
  • Impact: Medium (activation failures)
  • Mitigation:
    • Version component metadata schemas
    • Support backward compatibility
    • Gradual migration path
    • Comprehensive testing

Phase 5: Multi-Session Integration (5-7 hours) βœ… NEW​

Goal: Integrate Anthropic's multi-session pattern for automatic component activation across Claude Code sessions

Background: After completing Phases 1-4, all 288 components are documented, registered, and callable - but activation requires manual execution each session. Anthropic's official multi-session pattern (discovered via research 2025-11-29) solves this exact problem.

Our Current State (90% Aligned):

  • βœ… Filesystem component storage (agents/*.md, skills/*/SKILL.md, etc.)
  • βœ… Metadata registry (framework-registry.json - 117KB, 288 components)
  • βœ… Init script (scripts/init.sh)
  • βœ… Session context preservation (MEMORY-CONTEXT/)
  • βœ… Git-based tracking
  • βœ… ComponentActivator (567 lines) - loads and activates components

The 10% Gap (What Phase 5 Adds):

  • ❌ component-activation-status.json (tracks which components are active - like Anthropic's feature_list.json)
  • ❌ Enhanced init.sh (automatically loads activation status at session start)
  • ❌ Session startup workflow (scripts/session-startup.py)
  • ❌ setting_sources configuration (.claude/settings.json for SDK discovery)

Implementation Sub-Phases:

Phase 5.1: Foundation (2-3 hours)

  1. Design activation status schema (30 min)
  2. Create scripts/generate-activation-status.py (1 hour)
    • Scan all 288 components
    • Generate .claude/component-activation-status.json
    • Use JSON format (prevents accidental LLM modification)
  3. Update .claude/settings.json with setting_sources configuration (30 min)
  4. Commit baseline (15 min)
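A sketch of what generate-activation-status.py might emit (the status-file field names are assumptions; the actual schema is what step 1 designs):

```python
import json
import tempfile
from datetime import date
from pathlib import Path

def generate_activation_status(components: dict, out_path: Path) -> dict:
    """Write a component-activation-status.json-style state file.
    Every component starts inactive; field names here are assumptions
    about the yet-to-be-designed schema."""
    status = {
        "generated": date.today().isoformat(),
        "components": {
            ctype: {name: {"active": False} for name in names}
            for ctype, names in components.items()
        },
    }
    out_path.write_text(json.dumps(status, indent=2))
    return status

# usage sketch with a temporary directory standing in for .claude/
out = Path(tempfile.mkdtemp()) / "component-activation-status.json"
status = generate_activation_status(
    {"agents": ["git-workflow-orchestrator"], "commands": ["git-sync"]}, out
)
print(status["components"]["commands"]["git-sync"]["active"])  # False
```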

Phase 5.2: Session Initialization (2-3 hours)

  1. Enhance scripts/init.sh (1.5 hours)
    • Load component-activation-status.json
    • Verify critical components exist
    • Activate project-specific components
    • Display activation summary
  2. Create supporting scripts (1 hour):
    • scripts/verify-critical-components.py - Validate essential components
    • scripts/activate-project-components.py - Activate based on project type
  3. Create scripts/session-startup.py (30 min) - Full workflow automation
  4. Document workflow (30 min)

Phase 5.3: Activation Management (1-2 hours)

  1. Implement scripts/update-component-activation.py (1 hour)
    • Activate/deactivate components
    • Update status JSON atomically
    • Git-track all activation changes
  2. Create CLI interface (30 min)
    • coditect activate <component-type> <component-name>
    • coditect deactivate <component-type> <component-name>
    • coditect status - Show activation summary
  3. Test activation workflow (30 min)
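The CLI surface from step 2 could be sketched with argparse (subcommand names come from the task list above; everything else is an assumption about a not-yet-shipped tool):

```python
import argparse

def build_cli() -> argparse.ArgumentParser:
    """Sketch of the coditect activate/deactivate/status interface."""
    parser = argparse.ArgumentParser(prog="coditect")
    sub = parser.add_subparsers(dest="action", required=True)
    for action in ("activate", "deactivate"):
        cmd = sub.add_parser(action)
        cmd.add_argument("component_type",
                         choices=["agent", "skill", "command", "script", "hook"])
        cmd.add_argument("component_name")
    sub.add_parser("status")  # no arguments: prints the activation summary
    return parser

args = build_cli().parse_args(["activate", "agent", "git-workflow-orchestrator"])
print(args.action, args.component_type, args.component_name)
# activate agent git-workflow-orchestrator
```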

Phase 5.4: Validation & Documentation (30-60 min)

  1. Cross-reference with Anthropic docs (15 min)
  2. Integration testing (30 min)
    • Test session startup sequence
    • Verify automatic component activation
    • Validate git tracking
  3. Update documentation (15 min)
    • Update CLAUDE.md with session startup instructions
    • Add Phase 5 completion to PROJECT-PLAN.md
    • Document activation management commands

Deliverables:

  • .claude/component-activation-status.json (activation state tracker)
  • Enhanced scripts/init.sh (automatic activation)
  • scripts/generate-activation-status.py (initialization)
  • scripts/session-startup.py (workflow orchestration)
  • scripts/verify-critical-components.py (validation)
  • scripts/activate-project-components.py (intelligent activation)
  • scripts/update-component-activation.py (management)
  • Updated .claude/settings.json (SDK configuration)
  • Updated documentation (CLAUDE.md, README.md)

Acceptance Criteria:

  • βœ… Component activation automatic on session start
  • βœ… Status persists across sessions (via JSON + git)
  • βœ… Init.sh completes in <5 seconds
  • βœ… No manual activation required
  • βœ… 100% alignment with Anthropic multi-session pattern

Success Metrics:

  • Session startup time: <5 seconds
  • Components auto-activated: 245/288 (85%)
  • Manual intervention: 0% (fully automatic)
  • Pattern alignment: 100% (matches Anthropic official guidance)


πŸ“¦ DELIVERABLES​

Phase 1 Deliverables​

  • Updated AGENT-INDEX.md
  • Updated COMPLETE-INVENTORY.md
  • New SCRIPTS-INDEX.md
  • New HOOKS-INDEX.md

Phase 2 Deliverables​

  • scripts/core/registry_loader.py (NEW)
  • Updated config/framework-registry.json
  • Updated skills/REGISTRY.json
  • New config/commands-registry.json
  • New config/scripts-registry.json
  • New config/hooks-registry.json

Phase 3 Deliverables​

  • lib/activation/component_activator.py (NEW)
  • Updated lib/orchestration/orchestrator.py
  • Updated scripts/core/agent_dispatcher.py
  • New scripts/activate-all-components.py
  • Updated coditect-router

Phase 4 Deliverables​

  • lib/integration/framework_bridge.py (NEW) - βœ… Complete (415 lines)
  • End-to-end test suite - βœ… Validated
  • Consistency validation report - βœ… Generated
  • Automated sync scripts - βœ… Operational
  • Usage documentation - βœ… Complete

Phase 5 Deliverables (Multi-Session Integration)​

  • .claude/component-activation-status.json (activation state)
  • Enhanced scripts/init.sh (auto-activation)
  • scripts/generate-activation-status.py (initialization)
  • scripts/session-startup.py (workflow orchestration)
  • scripts/verify-critical-components.py (validation)
  • scripts/activate-project-components.py (intelligent activation)
  • scripts/update-component-activation.py (management CLI)
  • Updated .claude/settings.json (SDK configuration)
  • Updated documentation (CLAUDE.md, README.md)

πŸ‘₯ TEAM & RESOURCES​

Roles​

Primary Developer: Claude Code (Autonomous AI Agent)

  • Implements all code changes
  • Writes documentation
  • Runs tests
  • Commits to git

Project Manager: Claude Code (Self-Managed)

  • Tracks progress via TodoWrite tool
  • Updates PROJECT-PLAN.md
  • Manages dependencies
  • Reports status

QA Engineer: Claude Code (Self-Validation)

  • Writes test cases
  • Executes validation
  • Reports issues
  • Verifies fixes

Reviewer: Human (Hal Casteel)

  • Reviews final implementation
  • Approves for production
  • Provides feedback
  • Makes go/no-go decisions

Resources Required​

Compute:

  • Claude Code session (8-12 hours)
  • Local development environment

Storage:

  • ~500 KB for new registries
  • ~100 KB for new code files
  • ~50 KB for updated docs

External Dependencies:

  • None (all internal work)

πŸ“… TIMELINE​

Target Schedule (8-12 hours)​

Hour 0-3: Phase 1 (Documentation)

  • 0:00-1:00 - Update AGENT-INDEX.md, COMPLETE-INVENTORY.md
  • 1:00-2:00 - Create SCRIPTS-INDEX.md
  • 2:00-3:00 - Create HOOKS-INDEX.md, review/validate

Hour 3-7: Phase 2 (Registry Population)

  • 3:00-5:00 - Implement registry_loader.py
  • 5:00-6:00 - Run registry loader, populate all registries
  • 6:00-7:00 - Validate registries, fix issues

Hour 7-10: Phase 3 (Component Activation)

  • 7:00-8:30 - Implement component_activator.py
  • 8:30-9:30 - Integrate with orchestrator, dispatcher
  • 9:30-10:00 - Create activation script, test

Hour 10-12: Phase 4 (Integration & Testing)

  • 10:00-10:30 - Implement framework_bridge.py
  • 10:30-11:00 - Run end-to-end tests
  • 11:00-11:30 - Fix issues, validate consistency
  • 11:30-12:00 - Final documentation, commit

Milestones​

  • M1 (Hour 3): All components documented βœ“
  • M2 (Hour 7): All registries populated βœ“
  • M3 (Hour 10): All components activated βœ“
  • M4 (Hour 12): All tests passing, production-ready βœ“

βœ… ACCEPTANCE CRITERIA​

Phase 1-4 Acceptance (βœ… COMPLETE)​

  • All 60 agents documented in AGENT-INDEX.md
  • All 60 agents registered in framework-registry.json
  • All 60 agents invocable via Task tool (ComponentActivator operational)
  • All 30 skills registered in REGISTRY.json
  • framework-registry.json complete (117KB, 288 components)
  • RegistryLoader operational (641 lines)
  • ComponentActivator operational (567 lines)
  • FrameworkBridge operational (415 lines)
  • End-to-end tests validated
  • Automated registry sync process operational
  • Consistency validation reports generated

Phase 5 Acceptance (⏸️ PENDING)​

Must-Have (P0):

  • component-activation-status.json created (activation state tracker)
  • Enhanced scripts/init.sh loads activation status automatically
  • scripts/generate-activation-status.py generates initial status
  • scripts/session-startup.py orchestrates full workflow
  • .claude/settings.json configured with setting_sources
  • Session startup completes in <5 seconds
  • Components auto-activate on session start (no manual intervention)
  • All activation changes git-tracked

Should-Have (P1):

  • scripts/update-component-activation.py for runtime management
  • CLI commands: coditect activate/deactivate/status
  • Intelligent component activation based on project type
  • Critical component validation (verify-critical-components.py)

Nice-to-Have (P2):

  • Performance optimization (lazy loading, caching)
  • Enhanced error messages for failed activations
  • Component activation analytics
  • Interactive activation browser

πŸ“ NOTES & ASSUMPTIONS​

Assumptions​

  1. Component file formats are consistent (markdown with YAML frontmatter)
  2. Claude Code Tool invocation patterns are stable
  3. JSON registry schemas can accommodate all metadata
  4. No breaking changes to existing components during activation
  5. Human reviewer available for final approval

Dependencies​

Internal:

  • Existing orchestration infrastructure (lib/orchestration/)
  • Existing agent dispatcher (scripts/core/agent_dispatcher.py)
  • Git repository structure

External:

  • Claude Code platform (for Tool invocation)
  • Python 3.10+ environment
  • Git for version control

Constraints​

Technical:

  • Must not break existing functionality
  • Must maintain backward compatibility
  • Performance overhead < 100ms for activation

Time:

  • Must complete in 8-12 hours (same day)
  • Human review required before production

Resource:

  • Single Claude Code agent session
  • Local development environment only

πŸ“š REFERENCES​

Technical References​

  • orchestrator.py implementation
  • executor.py implementation
  • agent_dispatcher.py implementation
  • Claude Code Tool documentation

Project Plan Version: 1.0 Last Updated: 2025-11-29 Next Review: After Phase 1 completion