ADR-120: Unified Database Daemon & Script Consolidation
Status
Proposed
Executive Summary
This ADR proposes a two-pronged approach to reduce CODITECT's database complexity:
- Unified Database Daemon (codi-db): a single Rust binary that handles ALL database write operations
- Script Consolidation: reduce 60+ Python scripts to ~10 domain-based CLI tools
This eliminates lock conflicts, ensures schema consistency in one testable location, and dramatically reduces code maintenance overhead. The 60+ individual scripts become fallbacks only, with all primary database operations routed through the daemon.
Context
Current State: Distributed Writers
CODITECT currently has 60+ Python scripts and hooks that write directly to SQLite databases:
┌─────────────────────────────────────────────────────────────────────┐
│ CURRENT ARCHITECTURE: Distributed Writers │
│ │
│ [component-indexer.py] ──────┐ │
│ [unified-message-extractor.py] ──┼──► [platform.db] │
│ [session-retrospective.py] ──────┼──► [org.db] ◄── PROBLEMS │
│ [task-plan-sync.py] ─────────────┼──► [sessions.db] │
│ [proactive-error-suggester.py] ──┤ │
│ [56 other scripts...] ───────────┘ │
│ │
│ Problems: │
│ • Lock conflicts (index.lock, database locks) │
│ • Race conditions between concurrent writers │
│ • Schema drift across 60+ implementations │
│ • No transaction batching │
│ • Inconsistent error handling │
│ • Python startup overhead on every write │
│ │
└─────────────────────────────────────────────────────────────────────┘
Evidence of Problems
- Git index.lock conflicts (fixed in commit edae43da):
  - Pre-commit hook spawned git processes that competed for locks
  - Required lock-free rewrite to prevent deadlocks
- Database lock contention:
  - Multiple scripts writing simultaneously cause SQLITE_BUSY
  - Retry logic scattered across 60+ files with inconsistent behavior
- Schema inconsistency:
  - Each script defines its own table schemas
  - Migration logic duplicated and potentially divergent
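The duplicated retry logic looks roughly like the following in each script (a hypothetical sketch, not taken from any specific file; backoff constants vary per copy):

```python
import random
import sqlite3
import time

def insert_with_retry(conn, sql, params, attempts=5):
    """Ad-hoc SQLITE_BUSY retry loop of the kind duplicated across scripts."""
    for i in range(attempts):
        try:
            with conn:  # implicit transaction, committed on exit
                conn.execute(sql, params)
            return
        except sqlite3.OperationalError as exc:
            if "locked" not in str(exc) and "busy" not in str(exc):
                raise
            # Exponential backoff with jitter -- tuned differently in every file
            time.sleep(0.05 * (2 ** i) + random.random() * 0.01)
    raise sqlite3.OperationalError(f"database still busy after {attempts} attempts")
```

Each of the 60+ copies picks its own attempt count, backoff curve, and error-matching heuristic, which is exactly the inconsistency the daemon removes.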
The codi-watcher Precedent
The codi-watcher Rust binary (v0.4.0) demonstrates that consolidating functionality into Rust works well:
- Single responsibility: Watches sessions, triggers exports, processes pending files
- Efficient: Reads last 100KB of JSONL, parses tokens natively
- Reliable: Runs as launchd daemon 24/7
- Clean boundary: Delegates to Python only for unified-message-extractor.py
This ADR extends that pattern to ALL database writes AND consolidates the scripts themselves.
The Script Proliferation Problem
Beyond lock conflicts, 60+ scripts create maintenance overhead:
| Problem | Impact |
|---|---|
| 60+ schema definitions | Schema drift, inconsistent migrations |
| 60+ error handlers | Inconsistent retry logic, silent failures |
| 60+ path discovery patterns | ADR-114 compliance requires updating each file |
| Python startup overhead | ~100-200ms per script invocation |
| Testing burden | Each script needs individual test coverage |
| Documentation debt | 60+ files to document and maintain |
Current script count by domain:
| Domain | Scripts | Examples |
|---|---|---|
| Component indexing | 8 | component-indexer.py, component-frontmatter-indexer.py, component-discover.py |
| Context/Memory | 12 | unified-message-extractor.py, context-db.py, memory-retrieval.py |
| Session management | 7 | session-retrospective.py, auto-session-namer.py |
| Sync/Migration | 6 | task-plan-sync.py, cloud-sync-client.py |
| Learning/Skills | 5 | skill-pattern-analyzer.py, learning-db-migrate.py |
| Analytics | 4 | token-economics.py, tool-analytics.py |
| Setup/Config | 10 | CODITECT-CORE-INITIAL-SETUP.py, backup-context-db.sh |
| Hooks (db-writing) | 12 | proactive-error-suggester.py, task-tracking-enforcer.py |
| Total | 64 | |
Decision
1. Implement a Unified Database Daemon (codi-db)
Create a single Rust binary that handles ALL database write operations across all four tiers (ADR-118).
┌─────────────────────────────────────────────────────────────────────┐
│ PROPOSED ARCHITECTURE: Single Writer Daemon │
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ Python Scripts / Hooks / codi-watcher │ │
│ │ (Become thin clients - prepare data, call daemon) │ │
│ └─────────────────────────┬───────────────────────────────────┘ │
│ │ │
│ ▼ Unix Socket IPC │
│ ~/.coditect/run/codi-db.sock │
│ │ │
│ ┌─────────────────────────▼───────────────────────────────────┐ │
│ │ codi-db (Rust Daemon) │ │
│ │ │ │
│ │ • Single writer for ALL databases │ │
│ │ • Schema validation at write time │ │
│ │ • Transaction batching & WAL mode │ │
│ │ • Consistent error handling │ │
│ │ • Metrics & observability │ │
│ │ │ │
│ └──────┬──────────────┬──────────────┬──────────────┬─────────┘ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ [platform.db] [org.db] [sessions.db] [project.db] │
│ Tier 1 Tier 2 Tier 3 Tier 4 │
│ │
└─────────────────────────────────────────────────────────────────────┘
API Design
The daemon exposes a JSON-RPC API over a Unix domain socket; the logical endpoints below map to RPC methods:
// Endpoint structure
POST /v1/messages // Session messages (Tier 3)
POST /v1/tool_analytics // Tool usage analytics (Tier 3)
POST /v1/token_economics // Token usage data (Tier 3)
POST /v1/decisions // Architectural decisions (Tier 2)
POST /v1/skill_learnings // Accumulated learnings (Tier 2)
POST /v1/error_solutions // Error-solution pairs (Tier 2)
POST /v1/components // Component index (Tier 1)
POST /v1/tasks // Task tracking (Tier 3)
// Batch endpoint for high-throughput
POST /v1/batch // Multiple operations in single request
// Query endpoints (read-only, for validation)
GET /v1/health // Daemon health check
GET /v1/stats // Database statistics
GET /v1/schema // Current schema versions
Request/Response Format
// Request
{
  "jsonrpc": "2.0",
  "method": "insert_messages",
  "params": {
    "tier": 3,
    "table": "messages",
    "rows": [
      {
        "session_id": "uuid",
        "role": "assistant",
        "content": "...",
        "timestamp": "2026-01-26T14:00:00Z"
      }
    ]
  },
  "id": 1
}

// Response
{
  "jsonrpc": "2.0",
  "result": {
    "inserted": 1,
    "duration_ms": 2
  },
  "id": 1
}
Python Client Library
Provide a drop-in replacement for direct SQLite access:
# OLD: Direct SQLite access (60+ implementations)
import sqlite3
conn = sqlite3.connect(str(SESSIONS_DB))
cursor = conn.execute("INSERT INTO messages ...")
conn.commit()
# NEW: Daemon client (single implementation)
from coditect.db import CodiDB
db = CodiDB() # Connects to Unix socket
db.insert_messages([
{"session_id": "uuid", "role": "assistant", "content": "..."}
])
# Automatically batched, validated, and committed
Daemon Architecture
// codi-db/src/main.rs
use anyhow::Result;
use tokio::net::UnixListener;

#[tokio::main]
async fn main() -> Result<()> {
    let socket_path = get_socket_path();
    let listener = UnixListener::bind(&socket_path)?;

    // Initialize database connections (one per tier)
    let db_pool = DatabasePool::new()?;

    // Start metrics collector
    let metrics = Metrics::new();

    // Accept connections
    loop {
        let (stream, _) = listener.accept().await?;
        let db = db_pool.clone();
        let metrics = metrics.clone();
        tokio::spawn(async move {
            handle_connection(stream, db, metrics).await
        });
    }
}
Transaction Batching
The daemon automatically batches writes for performance:
// Collect writes for 10ms or until 1000 rows
struct WriteBatcher {
    db: DatabasePool,
    pending: Vec<WriteOp>,
    flush_interval: Duration,
    max_batch_size: usize,
}

impl WriteBatcher {
    async fn add(&mut self, op: WriteOp) -> Result<()> {
        self.pending.push(op);
        if self.pending.len() >= self.max_batch_size {
            self.flush().await?;
        }
        Ok(())
    }

    async fn flush(&mut self) -> Result<()> {
        // Single transaction for all pending writes
        let tx = self.db.begin().await?;
        for op in self.pending.drain(..) {
            op.execute(&tx).await?;
        }
        tx.commit().await?;
        Ok(())
    }
}
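Why batching matters: each per-row commit is its own SQLite transaction, while a batch shares one. A stdlib-only illustration (not CODITECT code) of the two patterns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (session_id TEXT, role TEXT, content TEXT)")
rows = [("s1", "assistant", "hi")] * 1000

# Unbatched: one transaction per insert (what distributed writers do today)
for r in rows:
    conn.execute("INSERT INTO messages VALUES (?, ?, ?)", r)
    conn.commit()

# Batched: one transaction for the whole set (what the daemon's flush() does)
with conn:
    conn.executemany("INSERT INTO messages VALUES (?, ?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM messages").fetchone()[0]
```

On a file-backed database each commit forces journal work, so the batched form is typically one to two orders of magnitude faster for small rows.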
Schema Management
The daemon owns all schemas and validates writes:
// Schema definitions compiled into binary
const SCHEMA_MESSAGES: &str = include_str!("schemas/messages.sql");
const SCHEMA_DECISIONS: &str = include_str!("schemas/decisions.sql");
// ...

// Validation at write time
fn validate_message(row: &MessageRow) -> Result<()> {
    ensure!(!row.session_id.is_empty(), "session_id required");
    ensure!(
        ["user", "assistant", "system"].contains(&row.role.as_str()),
        "invalid role"
    );
    Ok(())
}
2. Consolidate Scripts into Domain CLIs
Reduce 64 scripts to 10 domain-based CLI tools that call the daemon:
┌─────────────────────────────────────────────────────────────────────┐
│ SCRIPT CONSOLIDATION: 64 → 10 │
│ │
│ BEFORE (64 scripts): AFTER (10 CLIs): │
│ │
│ component-indexer.py ─┐ │
│ component-frontmatter.py │ codi component index │
│ component-discover.py ├───► codi component discover │
│ component-lifecycle.py │ codi component validate │
│ component-db-cli.py ─┘ codi component stats │
│ │
│ unified-message-extractor ─┐ │
│ context-db.py │ codi context extract │
│ memory-retrieval.py ├───► codi context query │
│ context-snapshot.py │ codi context snapshot │
│ unified-search.py ─┘ codi context search │
│ │
│ session-retrospective.py ─┐ codi session retro │
│ auto-session-namer.py ├───► codi session name │
│ session-log-git-sync.py ─┘ codi session sync │
│ │
│ task-plan-sync.py ─┐ codi sync push │
│ cloud-sync-client.py ├───► codi sync pull │
│ backup-context-db.sh ─┘ codi sync backup │
│ │
│ skill-pattern-analyzer.py ─┐ codi learn analyze │
│ learning-db-migrate.py ├───► codi learn migrate │
│ skill-selector.py ─┘ codi learn select │
│ │
└─────────────────────────────────────────────────────────────────────┘
Consolidated CLI Structure:
| CLI | Subcommands | Replaces |
|---|---|---|
| codi component | index, discover, validate, stats, lifecycle | 8 scripts |
| codi context | extract, query, snapshot, search, stats | 12 scripts |
| codi session | retro, name, sync, export, list | 7 scripts |
| codi sync | push, pull, backup, restore, status | 6 scripts |
| codi learn | analyze, migrate, select, patterns | 5 scripts |
| codi analytics | tokens, tools, report | 4 scripts |
| codi setup | install, update, verify, migrate | 10 scripts |
| codi hook | dispatch, validate, run | 12 hooks → 1 dispatcher |
| codi db | status, migrate, vacuum, export | daemon management |
| codi test | all, schema, integration | testing |
Total: 64 scripts → 10 CLIs with ~50 subcommands
Unified Schema Location
All database schemas defined in ONE location, compiled into the daemon:
tools/codi-db/schemas/
├── tier1-platform/
│ ├── components.sql
│ ├── component_fts.sql
│ └── migrations/
│ ├── 001_initial.sql
│ └── 002_add_model_binding.sql
├── tier2-org/
│ ├── decisions.sql
│ ├── skill_learnings.sql
│ ├── error_solutions.sql
│ └── migrations/
├── tier3-sessions/
│ ├── messages.sql
│ ├── tool_analytics.sql
│ ├── token_economics.sql
│ └── migrations/
└── tier4-project/
├── tasks.sql
└── migrations/
Benefits:
- One source of truth: All schemas in tools/codi-db/schemas/
- Compile-time validation: Rust includes schemas at build time
- Automated migrations: Daemon runs migrations on startup
- Testable: Single test suite covers all schema operations
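A startup migration runner over the numbered files above could follow the common user_version convention. Sketch in Python for readability (the daemon itself is Rust; function name and return value are illustrative):

```python
import sqlite3
from pathlib import Path

def run_migrations(conn: sqlite3.Connection, migrations_dir: Path) -> int:
    """Apply numbered NNN_*.sql files newer than the stored user_version."""
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for path in sorted(migrations_dir.glob("*.sql")):
        # Filenames like 001_initial.sql, 002_add_model_binding.sql
        version = int(path.name.split("_", 1)[0])
        if version <= current:
            continue
        conn.executescript(path.read_text())  # commits any open transaction
        conn.execute(f"PRAGMA user_version = {version}")
        current = version
    return current
```

Sorting on the zero-padded prefix guarantees ordered application, and re-running is a no-op because applied versions are skipped.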
Testing Strategy
Single test suite for all database operations:
// tools/codi-db/tests/integration_tests.rs
#[tokio::test]
async fn test_messages_schema() {
    let db = TestDb::new_tier3().await;

    // Insert
    let id = db.insert_message(MessageRow {
        session_id: "test-session",
        role: "assistant",
        content: "Hello",
        ..Default::default()
    }).await.unwrap();

    // Verify
    let msg = db.get_message(id).await.unwrap();
    assert_eq!(msg.role, "assistant");
}

#[tokio::test]
async fn test_schema_migration() {
    let db = TestDb::new_empty().await;
    db.run_migrations().await.unwrap();

    // Verify all tables exist
    assert!(db.table_exists("messages").await);
    assert!(db.table_exists("decisions").await);
    assert!(db.table_exists("components").await);
}

#[tokio::test]
async fn test_cross_tier_integrity() {
    // Verify foreign keys between tiers work correctly
    let db = TestDb::new_all_tiers().await;
    // ...
}
Test Coverage:
| Test Type | Location | Coverage |
|---|---|---|
| Unit tests | codi-db/src/*.rs | Schema validation, batching logic |
| Integration tests | codi-db/tests/ | Full CRUD operations, migrations |
| Python client tests | scripts/tests/test_codi_db.py | Client library, fallback behavior |
| End-to-end tests | scripts/tests/test_e2e_db.py | CLI → Daemon → DB flow |
Fallback Mode
Original scripts remain as fallbacks when daemon unavailable:
# scripts/core/codi_db.py
import socket
import warnings
from pathlib import Path

class CodiDB:
    def __init__(self):
        self.socket_path = Path.home() / ".coditect" / "run" / "codi-db.sock"
        self._fallback_mode = False

    def insert_messages(self, rows: list[dict]) -> int:
        if self._connect_to_daemon():
            return self._daemon_insert("messages", rows)
        else:
            # Fallback: Direct SQLite (legacy behavior)
            warnings.warn("codi-db daemon unavailable, using direct SQLite")
            self._fallback_mode = True
            return self._direct_sqlite_insert(rows)

    def _connect_to_daemon(self) -> bool:
        if not self.socket_path.exists():
            return False
        try:
            self._socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self._socket.connect(str(self.socket_path))
            return True
        except ConnectionRefusedError:
            return False
Fallback triggers:
- Daemon socket doesn't exist
- Daemon not responding (timeout)
- Daemon returns error
Fallback logging:
- All fallback operations logged to ~/.coditect/logs/codi-db-fallback.log
- Alert if fallback rate exceeds 5% of operations
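The "not responding" trigger implies a connect timeout rather than an indefinite block. One way to cover all three triggers with a single probe (the 200ms timeout is an assumption, not specified by this ADR):

```python
import socket
from pathlib import Path

def daemon_available(sock_path: Path, timeout: float = 0.2) -> bool:
    """True if the codi-db socket accepts a connection within the timeout."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(str(sock_path))
        return True
    except OSError:
        # Covers missing socket file, refused connection, and timeout
        return False
    finally:
        s.close()
```

Catching `OSError` (of which `ConnectionRefusedError`, `FileNotFoundError`, and `socket.timeout` are all subclasses) keeps the fallback decision to one code path.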
Migration Strategy
Phase 1: Daemon Foundation (Week 1-2)
Goal: Daemon running with /v1/messages endpoint + Python client
- Create tools/codi-db/ Rust project with schema directory
- Implement Unix socket listener
- Implement /v1/messages endpoint with schema validation
- Create Python client library scripts/core/codi_db.py with fallback
- Create test suite for daemon
- Create launchd plist for daemon
Deliverables:
- codi-db binary (v0.1.0)
- codi_db.py client with fallback mode
- Schema tests passing
- launchd plist
Phase 2: First CLI Consolidation (Week 3-4)
Goal: codi context CLI replacing 12 context/memory scripts
- Implement /v1/components, /v1/tool_analytics, /v1/token_economics endpoints
- Create codi CLI framework (Rust with clap)
- Implement codi context subcommands:
  - codi context extract (replaces unified-message-extractor.py)
  - codi context query (replaces context-db.py)
  - codi context search (replaces unified-search.py)
  - codi context snapshot (replaces context-snapshot.py)
- Mark original 12 scripts as DEPRECATED (kept as fallback)
Deliverables:
- codi context CLI with 6 subcommands
- 12 scripts marked deprecated
- Migration guide for users
Phase 3: Component & Knowledge Consolidation (Week 5-6)
Goal: codi component and codi learn CLIs
- Implement Tier 2 endpoints (decisions, skill_learnings, error_solutions)
- Implement codi component subcommands:
  - codi component index (replaces component-indexer.py + frontmatter-indexer.py)
  - codi component discover (replaces component-discover.py)
  - codi component validate (replaces component-lifecycle.py)
- Implement codi learn subcommands:
  - codi learn analyze (replaces skill-pattern-analyzer.py)
  - codi learn migrate (replaces learning-db-migrate.py)
- Mark original 13 scripts as DEPRECATED
Deliverables:
- codi component CLI (5 subcommands, 8 scripts deprecated)
- codi learn CLI (4 subcommands, 5 scripts deprecated)
- All Tier 2 (critical) writes via daemon
Phase 4: Session & Sync Consolidation (Week 7-8)
Goal: codi session, codi sync, codi analytics CLIs
- Implement remaining endpoints (tasks, etc.)
- Implement codi session subcommands:
  - codi session retro (replaces session-retrospective.py)
  - codi session name (replaces auto-session-namer.py)
  - codi session sync (replaces session-log-git-sync.py)
- Implement codi sync subcommands:
  - codi sync push/pull (replaces task-plan-sync.py, cloud-sync-client.py)
  - codi sync backup (replaces backup-context-db.sh)
- Implement codi analytics subcommands
- Mark original 17 scripts as DEPRECATED
Deliverables:
- codi session CLI (5 subcommands)
- codi sync CLI (5 subcommands)
- codi analytics CLI (3 subcommands)
- 100% of database writes via daemon
Phase 5: Hook Unification & Setup (Week 9-10)
Goal: codi hook and codi setup CLIs
- Create codi hook dispatch: a single entry point for all database-writing hooks
- Update Claude Code settings.json to use codi hook dispatch
- Create codi setup subcommands:
  - codi setup install (replaces CODITECT-CORE-INITIAL-SETUP.py)
  - codi setup verify (replaces multiple verification scripts)
  - codi setup migrate (database migrations)
- Mark original 22 scripts/hooks as DEPRECATED
Deliverables:
- codi hook CLI (single dispatcher)
- codi setup CLI (4 subcommands)
- 12 hooks consolidated into 1
Phase 6: Cleanup & Documentation (Week 11-12)
Goal: Remove deprecated code, finalize documentation
- Move 64 deprecated scripts to scripts/deprecated/ (kept as fallback reference)
- Add lint rule to prevent new direct SQLite access
- Update CLAUDE.md with new patterns
- Update all documentation to reference the codi CLI
- Performance optimization and load testing
- Create migration guide for users still on old scripts
Deliverables:
- 64 scripts moved to deprecated/
- 10 active CLIs with ~50 subcommands
- Complete documentation
- Migration guide
Consequences
Positive
- No more lock conflicts: Single writer eliminates race conditions
- Schema consistency: One source of truth for all table definitions in tools/codi-db/schemas/
- Transaction batching: 10-100x write performance improvement
- Better observability: Centralized metrics and logging
- Dramatic code reduction: 64 scripts → 10 CLIs (85% reduction in entry points)
- Single test location: All database tests in one Rust test suite
- Reliable backups: Daemon ensures WAL checkpoints before backup
- Graceful degradation: Fallback to original scripts if daemon unavailable
- Faster invocation: Rust CLI startup ~10ms vs Python ~100-200ms
- Easier maintenance: Update one CLI vs 64 scripts for path/API changes
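On the "reliable backups" point: checkpointing the WAL before copying ensures the backup file contains all committed writes. A minimal sketch using the stdlib sqlite3 online backup API (function name is illustrative; the daemon would do this in Rust):

```python
import sqlite3

def backup_database(src_path: str, dst_path: str) -> None:
    """Checkpoint the WAL, then copy via SQLite's online backup API."""
    src = sqlite3.connect(src_path)
    try:
        # Fold WAL contents back into the main database file first
        src.execute("PRAGMA wal_checkpoint(TRUNCATE)")
        dst = sqlite3.connect(dst_path)
        try:
            src.backup(dst)
        finally:
            dst.close()
    finally:
        src.close()
```

Because the daemon is the only writer, it can checkpoint at a known-quiescent moment instead of racing other processes.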
Negative
- New dependency: Daemon must be running for writes to succeed
- Learning curve: Users must learn the codi <domain> <action> pattern
- Migration effort: 12-week migration timeline
- Two languages: Rust daemon + Python fallbacks (temporary)
Mitigations
| Risk | Mitigation |
|---|---|
| Daemon not running | Fallback to direct SQLite scripts with warning |
| Daemon crashes | launchd auto-restart, write queue persistence |
| User confusion | Clear migration guide, deprecation warnings |
| Rust complexity | Well-documented codebase, team training |
| Fallback drift | Fallback scripts frozen, not enhanced |
Alternatives Considered
Alternative 1: Keep Distributed Writers
Rejected because:
- Lock conflicts will continue
- Schema drift will worsen
- No path to better performance
Alternative 2: Python Daemon
Rejected because:
- Python startup overhead
- GIL limits concurrency
- Less reliable for 24/7 daemon
Alternative 3: Rust Library with PyO3 Bindings
Considered but deferred:
- Higher complexity (Rust + Python interop)
- Still have distributed writers
- Could be Phase 6 optimization
Alternative 4: SQLite WAL + Advisory Locks
Rejected because:
- Doesn't solve schema consistency
- Doesn't provide batching
- Complex coordination logic still needed
Implementation Notes
Cargo.toml Dependencies
codi-db (daemon):
[package]
name = "codi-db"
version = "0.1.0"
edition = "2021"
[dependencies]
tokio = { version = "1.41", features = ["full", "net"] }
rusqlite = { version = "0.32", features = ["bundled"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
jsonrpc-core = "18.0"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
anyhow = "1.0"
dirs = "5.0"
chrono = { version = "0.4", features = ["serde"] }
codi-cli (unified CLI):
[package]
name = "codi"
version = "0.1.0"
edition = "2021"
[[bin]]
name = "codi"
path = "src/main.rs"
[dependencies]
clap = { version = "4.5", features = ["derive"] }
tokio = { version = "1.41", features = ["full", "net"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
anyhow = "1.0"
dirs = "5.0"
chrono = { version = "0.4", features = ["serde"] }
colored = "2.1"
indicatif = "0.17" # Progress bars
File Locations
tools/codi-db/ # Database daemon
├── Cargo.toml
├── src/
│ ├── main.rs # Daemon entry point
│ ├── server.rs # Unix socket server
│ ├── handlers.rs # Request handlers
│ ├── database.rs # SQLite connection pool
│ ├── schema.rs # Schema definitions
│ ├── validation.rs # Write validation
│ ├── batching.rs # Transaction batching
│ └── metrics.rs # Observability
├── schemas/ # SINGLE SOURCE OF TRUTH
│ ├── tier1-platform/
│ │ ├── components.sql
│ │ └── migrations/
│ ├── tier2-org/
│ │ ├── decisions.sql
│ │ ├── skill_learnings.sql
│ │ └── migrations/
│ ├── tier3-sessions/
│ │ ├── messages.sql
│ │ ├── tool_analytics.sql
│ │ └── migrations/
│ └── tier4-project/
│ └── tasks.sql
├── tests/
│ ├── integration_tests.rs # Full CRUD tests
│ ├── schema_tests.rs # Migration tests
│ └── fixtures/
└── README.md
tools/codi-cli/ # Unified CLI
├── Cargo.toml
├── src/
│ ├── main.rs # CLI entry point
│ ├── commands/
│ │ ├── mod.rs
│ │ ├── component.rs # codi component *
│ │ ├── context.rs # codi context *
│ │ ├── session.rs # codi session *
│ │ ├── sync.rs # codi sync *
│ │ ├── learn.rs # codi learn *
│ │ ├── analytics.rs # codi analytics *
│ │ ├── hook.rs # codi hook *
│ │ ├── setup.rs # codi setup *
│ │ ├── db.rs # codi db *
│ │ └── test.rs # codi test *
│ └── client.rs # Daemon client
└── README.md
scripts/
├── core/
│ └── codi_db.py # Python client (fallback)
└── deprecated/ # Original scripts (fallback only)
├── README.md # "These are fallbacks, use codi CLI"
├── component-indexer.py
├── unified-message-extractor.py
└── ... (62 more)
config/
└── codi-db.json # Daemon configuration
launchd Configuration
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>ai.coditect.codi-db</string>
<key>ProgramArguments</key>
<array>
<string>/Users/USER/.coditect/bin/codi-db</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
<key>StandardOutPath</key>
<string>/Users/USER/.coditect/logs/codi-db.log</string>
<key>StandardErrorPath</key>
<string>/Users/USER/.coditect/logs/codi-db.error.log</string>
</dict>
</plist>
Success Metrics
| Metric | Current | Target |
|---|---|---|
| Lock conflict errors/day | ~5 | 0 |
| Average write latency | 50ms | 5ms (batched) |
| Active database scripts | 64 | 10 CLIs |
| Schema definition locations | 60+ | 1 (tools/codi-db/schemas/) |
| Database backup reliability | 95% | 99.9% |
| Test coverage (db operations) | ~30% scattered | 90% in single suite |
| Lines of database code | ~15,000 | ~5,000 |
| CLI startup time | 100-200ms | <20ms |
| Fallback usage rate | N/A | <5% of operations |
References
- ADR-118: Four-Tier Database Architecture
- ADR-114: User Data Separation from Framework
- codi-watcher source: tools/context-watcher/
- SQLite WAL mode: https://sqlite.org/wal.html
- Rust rusqlite: https://docs.rs/rusqlite/
- Clap CLI framework: https://docs.rs/clap/
Appendix: Script Inventory
Scripts to be consolidated (64 total):
| Domain | Count | Scripts |
|---|---|---|
| Component | 8 | component-indexer.py, component-frontmatter-indexer.py, component-discover.py, component-lifecycle.py, component-db-cli.py, component-improvement-orchestrator.py, ensure_component_registered.py, component_metadata_registry.py |
| Context | 12 | unified-message-extractor.py, context-db.py, context-snapshot.py, unified-search.py, memory-retrieval.py, test-memory-retrieval.py, context-query.py, semantic-search-cli.py, embed-text.py, vector-store.py, context-index-rebuild.py, dedup-messages.py |
| Session | 7 | session-retrospective.py, auto-session-namer.py, session-log-git-sync.py, session-insights-analyzer.py, session-name-generator.py, export-session.py, archive-sessions.py |
| Sync | 6 | task-plan-sync.py, cloud-sync-client.py, backup-context-db.sh, restore-context-db.sh, sync-status.py, offline-queue.py |
| Learning | 5 | skill-pattern-analyzer.py, learning-db-migrate.py, skill-selector.py, learning_db_query.py, skill-health-monitor.py |
| Analytics | 4 | token-economics.py, tool-analytics.py, usage-report.py, trajectory_logging.py |
| Setup | 10 | CODITECT-CORE-INITIAL-SETUP.py, coditect-setup.py, update-coditect.py, verify-install.py, migrate-paths.py, check-version.py, onboard-wizard.py, developer-setup.py, platform-index-db.py, projects-db.py |
| Hooks (db) | 12 | proactive-error-suggester.py, task-tracking-enforcer.py, session-auto-recall.py, component-database-sync.py, pre-backup.py, post-backup.py, pre-restore.py, post-restore.py, orient-context-validator.py, context-query-validator.py, backup-integrity-validator.py, scheduled-backup-monitor.py |
Decision Date: 2026-01-26
Review Date: 2026-02-09 (after Phase 1)
Full Migration Complete: 2026-04-06 (Week 12)