ADR-046: Continual Learning System

Status

Accepted - December 28, 2025

Context

CODITECT agents and skills execute thousands of operations across sessions. Without a learning mechanism, the same mistakes are repeated, successful patterns aren't amplified, and the framework doesn't improve over time.

Problem Statement

  1. No Pattern Recognition: Successful execution patterns aren't captured
  2. Repeated Mistakes: Same errors occur across sessions
  3. Static Skills: Skills don't evolve based on real-world usage
  4. Lost Context: Session learnings don't persist
  5. No Feedback Loop: No mechanism to improve based on outcomes

Requirements

  • Capture successful patterns from session execution
  • Detect anti-patterns and failures
  • Generate improvement recommendations for skills
  • Persist learnings across sessions
  • Support manual and automatic learning triggers

Decision

Implement a Continual Learning System with three phases: Observation, Analysis, and Application.

Architecture

```
                  CONTINUAL LEARNING SYSTEM

                      SESSION EXECUTION
                              │
                              ▼
┌────────────────────────────────────────────────────────────┐
│ PHASE 1: OBSERVATION                                       │
│                                                            │
│ Collect:                                                   │
│ • Tool invocations (success/failure)                       │
│ • Execution patterns (sequence, timing)                    │
│ • Error messages and recovery actions                      │
│ • User corrections and feedback                            │
│ • Skill usage frequency and outcomes                       │
│                                                            │
│ Storage: sessions.db (Tier 3)                              │
└─────────────────────────────┬──────────────────────────────┘
                              │
                              ▼
┌────────────────────────────────────────────────────────────┐
│ PHASE 2: ANALYSIS                                          │
│                                                            │
│ Pattern Detection:                                         │
│ • Successful sequences (read → analyze → implement)        │
│ • Failure patterns (retry loops, timeouts)                 │
│ • Anti-patterns (excessive retries, context confusion)     │
│                                                            │
│ Triggers:                                                  │
│ • Session end (automatic retrospective)                    │
│ • Manual /retrospective command                            │
│ • Scheduled daily analysis                                 │
│                                                            │
│ Analysis Engine: skill-pattern-analyzer.py                 │
└─────────────────────────────┬──────────────────────────────┘
                              │
                              ▼
┌────────────────────────────────────────────────────────────┐
│ PHASE 3: APPLICATION                                       │
│                                                            │
│ Learning Outputs:                                          │
│ • Skill improvement recommendations                        │
│ • Pattern library updates                                  │
│ • Error-solution pairs (for future retrieval)              │
│ • Anti-pattern warnings                                    │
│                                                            │
│ Storage: org.db (Tier 2 - irreplaceable)                   │
│ Tables: skill_learnings, error_solutions, decisions        │
└─────────────────────────────┬──────────────────────────────┘
                              │
                              ▼
┌────────────────────────────────────────────────────────────┐
│ SESSION START (NEXT SESSION)                               │
│                                                            │
│ Load Learnings:                                            │
│ • Retrieve relevant skill_learnings from org.db            │
│ • Apply pattern preferences                                │
│ • Avoid known anti-patterns                                │
│ • Recall error-solution pairs when errors occur            │
└────────────────────────────────────────────────────────────┘
```

Learning Types

| Type | Description | Storage | Persistence |
|------|-------------|---------|-------------|
| Skill Learnings | Improvements to skill execution | org.db | Permanent |
| Error Solutions | Error message → solution pairs | org.db | Permanent |
| Patterns | Successful execution sequences | org.db | Permanent |
| Anti-Patterns | Failure patterns to avoid | org.db | Permanent |
| Decisions | Architectural/design decisions | org.db | Permanent |
| Session Metrics | Execution statistics | sessions.db | Session-scoped |

Data Model (ADR-118 Compliant)

```sql
-- In org.db (Tier 2 - IRREPLACEABLE)

CREATE TABLE IF NOT EXISTS skill_learnings (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    skill_name TEXT NOT NULL,
    learning_type TEXT NOT NULL,      -- improvement, warning, pattern
    description TEXT NOT NULL,
    confidence REAL DEFAULT 0.5,
    times_applied INTEGER DEFAULT 0,
    success_rate REAL DEFAULT 0.0,
    created_at TEXT DEFAULT (datetime('now')),
    updated_at TEXT DEFAULT (datetime('now'))
);

CREATE TABLE IF NOT EXISTS error_solutions (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    error_signature TEXT NOT NULL,    -- Normalized error pattern
    error_type TEXT,                  -- TypeError, ImportError, etc.
    solution_description TEXT NOT NULL,
    solution_code TEXT,               -- Optional code snippet
    success_count INTEGER DEFAULT 1,
    failure_count INTEGER DEFAULT 0,
    created_at TEXT DEFAULT (datetime('now')),
    UNIQUE(error_signature)
);

CREATE TABLE IF NOT EXISTS execution_patterns (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    pattern_name TEXT NOT NULL,
    pattern_type TEXT NOT NULL,       -- success, anti-pattern
    tool_sequence TEXT NOT NULL,      -- JSON array of tool names
    context_description TEXT,
    occurrence_count INTEGER DEFAULT 1,
    success_rate REAL DEFAULT 0.0,
    created_at TEXT DEFAULT (datetime('now'))
);

-- In sessions.db (Tier 3 - regenerable)

CREATE TABLE IF NOT EXISTS session_observations (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    session_id TEXT NOT NULL,
    timestamp TEXT NOT NULL,
    observation_type TEXT NOT NULL,   -- tool_use, error, correction, feedback
    tool_name TEXT,
    success INTEGER,                  -- 1 for success, 0 for failure
    duration_ms INTEGER,
    details TEXT,                     -- JSON with additional context
    created_at TEXT DEFAULT (datetime('now'))
);
```

Pattern Detection

```python
from dataclasses import dataclass
from typing import List
from collections import Counter
import sqlite3


@dataclass
class DetectedPattern:
    pattern_type: str  # "success" or "anti-pattern"
    tool_sequence: List[str]
    occurrence_count: int
    success_rate: float
    description: str


class PatternAnalyzer:
    def __init__(self, sessions_db: str, org_db: str):
        self.sessions_db = sessions_db
        self.org_db = org_db

    def analyze_session(self, session_id: str) -> List[DetectedPattern]:
        """Analyze a session for patterns and anti-patterns."""

        # Get all observations for the session
        observations = self._get_session_observations(session_id)

        # Detect tool sequences
        sequences = self._extract_sequences(observations)

        # Identify successful patterns (success rate above 80%)
        success_patterns = [
            DetectedPattern(
                pattern_type="success",
                tool_sequence=seq,
                occurrence_count=1,
                success_rate=self._calculate_success_rate(seq),
                description=" → ".join(seq),
            )
            for seq in sequences
            if self._calculate_success_rate(seq) > 0.8
        ]

        # Identify anti-patterns
        anti_patterns = self._detect_anti_patterns(observations)

        # Store findings in org.db
        self._store_patterns(success_patterns, anti_patterns)

        return success_patterns + anti_patterns

    def _detect_anti_patterns(self, observations) -> List[DetectedPattern]:
        """Detect known anti-patterns in observations."""

        anti_patterns = []

        # Anti-pattern 1: Excessive retries (same tool fails 3+ times)
        tool_failures = Counter(
            obs['tool_name'] for obs in observations
            if not obs['success']
        )
        for tool, count in tool_failures.items():
            if count >= 3:
                anti_patterns.append(DetectedPattern(
                    pattern_type="anti-pattern",
                    tool_sequence=[tool] * count,
                    occurrence_count=count,
                    success_rate=0.0,
                    description=f"Excessive retries: {tool} failed {count} times"
                ))

        # Anti-pattern 2: Context confusion (rapid tool switching)
        rapid_switches = self._detect_rapid_switching(observations)
        if rapid_switches:
            anti_patterns.append(DetectedPattern(
                pattern_type="anti-pattern",
                tool_sequence=rapid_switches,
                occurrence_count=1,
                success_rate=0.3,
                description="Context confusion: rapid tool switching detected"
            ))

        return anti_patterns
```
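The `_detect_rapid_switching` helper is left abstract above. One plausible heuristic (an assumption for illustration, not the shipped logic) flags a window of consecutive observations in which the tool changes on every step:

```python
from typing import List, Optional

# Assumed window size: four consecutive observations, each using a
# different tool than the last, suggests thrashing rather than progress.
WINDOW = 4

def detect_rapid_switching(observations: List[dict]) -> Optional[List[str]]:
    """Return the first run of WINDOW observations whose tool changes
    at every step, or None if no such run exists."""
    tools = [obs["tool_name"] for obs in observations if obs.get("tool_name")]
    for i in range(len(tools) - WINDOW + 1):
        window = tools[i:i + WINDOW]
        # Every adjacent pair differs -> rapid switching
        if all(a != b for a, b in zip(window, window[1:])):
            return window
    return None

if __name__ == "__main__":
    obs = [{"tool_name": t} for t in ["Read", "Grep", "Bash", "Edit", "Edit"]]
    print(detect_rapid_switching(obs))  # ['Read', 'Grep', 'Bash', 'Edit']
```

A production version would likely also weight the `duration_ms` between observations, since slow deliberate switching is not confusion.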

Session Retrospective

```python
def run_session_retrospective(session_id: str) -> dict:
    """
    Run the end-of-session retrospective to extract learnings.

    Called by:
    - PostSession hook (automatic)
    - /retrospective command (manual)
    """

    analyzer = PatternAnalyzer(
        sessions_db=get_sessions_db_path(),
        org_db=get_org_db_path()
    )

    # Analyze session
    patterns = analyzer.analyze_session(session_id)

    # Extract error-solution pairs
    error_solutions = extract_error_solutions(session_id)

    # Generate skill improvements
    improvements = generate_skill_improvements(patterns)

    # Store in org.db
    store_learnings(
        patterns=patterns,
        error_solutions=error_solutions,
        improvements=improvements
    )

    return {
        "session_id": session_id,
        "patterns_detected": len(patterns),
        "error_solutions_extracted": len(error_solutions),
        "skill_improvements": len(improvements),
        "summary": generate_retrospective_summary(patterns, improvements)
    }
```
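`extract_error_solutions` depends on collapsing raw error messages into the stable `error_signature` stored in org.db. A minimal normalization sketch (the exact replacement rules are assumptions; real messages would need more cases):

```python
import re

def normalize_error_signature(message: str) -> str:
    """Reduce a raw error message to a stable signature so the same
    class of error matches across sessions. Hex addresses, absolute
    paths, line numbers, and quoted identifiers become placeholders."""
    sig = message.strip().splitlines()[-1]           # keep the final line
    sig = re.sub(r"0x[0-9a-fA-F]+", "<addr>", sig)   # hex addresses
    sig = re.sub(r"(/[\w.\-]+)+", "<path>", sig)     # absolute paths
    sig = re.sub(r"line \d+", "line <n>", sig)       # line numbers
    sig = re.sub(r"'[^']*'", "'<name>'", sig)        # quoted identifiers
    return sig

if __name__ == "__main__":
    raw = "ImportError: cannot import name 'run_session_retrospective' line 12"
    print(normalize_error_signature(raw))
    # ImportError: cannot import name '<name>' line <n>
```

Two sessions that hit the same error class then resolve to one `error_solutions` row, which is what lets the `UNIQUE(error_signature)` upsert accumulate success counts.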

Commands

```
# View skill health dashboard
/optimize-skills
# Shows: skill usage, success rates, improvement recommendations

# Full analysis with recommendations
/optimize-skills --full

# Run session retrospective manually
/retrospective

# Query learnings
/cxq --decisions                   # List decisions from org.db
/cxq --patterns --language python  # Code patterns
/cxq --errors "TypeError"          # Error-solution pairs
```

Hook Integration

```python
# hooks/session-retrospective.py
"""PostSession hook for automatic learning extraction."""

import json
import sys

from continual_learning import run_session_retrospective


def main():
    input_data = json.loads(sys.stdin.read())

    if input_data.get("event") == "session.end":
        session_id = input_data.get("session_id")
        result = run_session_retrospective(session_id)

        print(json.dumps({
            "status": "success",
            "learnings_extracted": result["patterns_detected"],
            "summary": result["summary"]
        }))


if __name__ == "__main__":
    main()
```

Implementation

Files

| File | Purpose |
|------|---------|
| scripts/skill-pattern-analyzer.py | Pattern detection and analysis |
| hooks/session-retrospective.py | PostSession learning extraction |
| commands/optimize-skills.md | Skill health dashboard command |
| commands/retrospective.md | Manual retrospective command |
| skills/skill-improvement-tracker/SKILL.md | Skill tracking methodology |

Skill Improvement Loop

```
SESSION START                          SESSION END
      │                                     │
      ▼                                     ▼
┌──────────┐                        ┌──────────────┐
│   Load   │◀───────────────────────│     Run      │
│Learnings │                        │Retrospective │
└────┬─────┘                        └──────────────┘
     │                                     ▲
     ▼                                     │
┌──────────┐   ┌──────────┐         ┌──────────────┐
│  Track   │──▶│  Detect  │────────▶│   Generate   │
│  Skills  │   │ Patterns │         │ Improvements │
└──────────┘   └──────────┘         └──────────────┘
```

Consequences

Positive

  1. Self-Improvement: Framework gets better over time
  2. Error Recovery: Known errors have pre-computed solutions
  3. Pattern Reuse: Successful patterns are amplified
  4. Reduced Repetition: Same mistakes aren't repeated

Negative

  1. Storage Growth: org.db grows with learnings
  2. False Positives: Some patterns may be coincidental
  3. Staleness: Old learnings may become outdated

Risks

| Risk | Mitigation |
|------|------------|
| Learning wrong patterns | Confidence scoring + manual review |
| Storage bloat | Periodic cleanup of low-value learnings |
| Outdated learnings | Decay factor for old patterns |
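The decay-factor mitigation for outdated learnings could be an exponential down-weighting of stored confidence by age. A sketch under an assumed 30-day half-life (the function name and parameters are illustrative):

```python
from datetime import datetime, timezone
from typing import Optional

# Assumed half-life: a learning loses half its confidence every 30 days.
HALF_LIFE_DAYS = 30.0

def decayed_confidence(confidence: float, created_at: str,
                       now: Optional[datetime] = None) -> float:
    """Down-weight a stored confidence score by the learning's age.
    created_at uses the 'YYYY-MM-DD HH:MM:SS' format that SQLite's
    datetime('now') default writes into org.db."""
    created = datetime.fromisoformat(created_at).replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    age_days = (now - created).total_seconds() / 86400.0
    return confidence * 0.5 ** (age_days / HALF_LIFE_DAYS)

if __name__ == "__main__":
    now = datetime(2026, 1, 27, tzinfo=timezone.utc)
    # A learning created exactly 30 days earlier retains half its confidence.
    print(round(decayed_confidence(0.8, "2025-12-28 00:00:00", now), 2))  # 0.4
```

Learnings whose decayed confidence falls below a threshold become candidates for the periodic cleanup noted above.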

Author: CODITECT Team
Approved: December 28, 2025
Migration: Migrated from cloud-infra per ADR-150 on 2026-02-03