Memory Management Guide
Complete guide to CODITECT's anti-forgetting memory system.
Target Audience: All CODITECT users
Last Updated: December 22, 2025
Status: Production Ready
Table of Contents
- Why Memory Matters
- Quick Start (5 Minutes)
- The /cx Command: Capture Context
- The /cxq Command: Query Context
- Understanding Entry Types (ADR-025)
- Daily Workflows
- Large Session Processing
- Best Practices
- Troubleshooting
Why Memory Matters
The Anti-Forgetting Problem
Every new Claude Code session starts with a blank slate. Without memory:
| Without Memory | With CODITECT Memory |
|---|---|
| 5-10 min re-explaining context | Instant recall |
| Contradicts past decisions | Remembers what was decided |
| Re-debug same issues | Learns from past fixes |
| Inconsistent code patterns | Reuses proven patterns |
The Solution
CODITECT captures, indexes, and retrieves everything from your sessions:
Session Work → /cx (extract) → ADR-118 DBs (index) → /cxq (recall)
      ↓                               ↓
JSONL Sessions            Decisions, Patterns, Error Solutions
Storage (ADR-118 Four-Tier):
- context-storage/sessions.db - Tier 3: Session messages (FTS5)
- context-storage/org.db - Tier 2: Decisions, patterns, errors (CRITICAL)
- context-storage/unified_messages.jsonl - Raw message archive
- context-storage/exports-pending/ - Unprocessed export files (where exports go first)
- context-storage/exports-archive/ - Processed export files (moved after /cx)
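The Tier-3 store can be pictured as an FTS5 virtual table over session messages. A minimal sketch with an illustrative schema (the real ADR-118 column layout may differ):

```python
import sqlite3

# Minimal sketch of the Tier-3 message store: an FTS5 virtual table
# holding session messages. Column names here are illustrative,
# not the actual ADR-118 schema.
conn = sqlite3.connect(":memory:")  # real store: context-storage/sessions.db
conn.execute(
    "CREATE VIRTUAL TABLE messages USING fts5(role, content, session_id)"
)
conn.executemany(
    "INSERT INTO messages VALUES (?, ?, ?)",
    [
        ("user", "How do we handle JWT authentication?", "s1"),
        ("assistant", "Use a signed JWT with a short expiry.", "s1"),
    ],
)

# Full-text search: the same mechanism /cxq uses for its default mode
rows = conn.execute(
    "SELECT role, content FROM messages WHERE messages MATCH ?", ("JWT",)
).fetchall()
for role, content in rows:
    print(f"{role}: {content}")
```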
Quick Start (5 Minutes)
Step 1: Capture Current Session
/cx
Extracts all messages, indexes them for search, and archives processed exports.
Step 2: Query Your Memory
/cxq --recall "topic you're working on"
Returns: relevant messages + decisions + code patterns + error solutions.
Step 3: Build Knowledge Base (First Time)
/cxq --index-comprehensive # Index ALL 6 entry types (recommended)
/cxq --extract # Extract decisions, patterns, errors
/cxq --embeddings # (Optional) Enable semantic search
Note: Use --index-comprehensive instead of --index to capture all entry types including system events, summaries, and file backups.
The /cx Command: Capture Context
Basic Usage
/cx # Process all sessions and exports
/cx FILE # Process single file
/cx --with-embeddings # Also generate semantic embeddings
/cx --min-size 10 # Only large JSONL files (>10MB)
What /cx Does
- Scans ~/.claude/projects/ for JSONL session files (read-only)
- Finds export TXT files in exports-pending/ (primary) and project directories
- Extracts unique messages using SHA-256 deduplication
- Archives processed export files to context-storage/exports-archive/
- Auto-indexes into SQLite with FTS5 full-text search
- Auto-extracts knowledge (decisions, patterns, errors)
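The deduplication step can be sketched as hashing each message's stable fields and keeping only unseen hashes. The field choice (role, content) is an assumption, not the actual /cx implementation:

```python
import hashlib
import json

# Sketch of SHA-256 deduplication: hash each message on its stable
# fields, keep only the first occurrence of each hash.
def message_key(msg: dict) -> str:
    canonical = json.dumps(
        {"role": msg.get("role"), "content": msg.get("content")},
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def dedupe(messages):
    seen, unique = set(), []
    for msg in messages:
        key = message_key(msg)
        if key not in seen:  # skip exact duplicates across exports
            seen.add(key)
            unique.append(msg)
    return unique

msgs = [
    {"role": "user", "content": "fix the bug"},
    {"role": "user", "content": "fix the bug"},  # duplicate from a re-export
    {"role": "assistant", "content": "done"},
]
unique_msgs = dedupe(msgs)
print(len(unique_msgs))  # 2
```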
When to Run
| Scenario | Recommendation |
|---|---|
| End of work session | Always |
| After major milestone | Run to preserve decisions |
| Before extended break | Ensures nothing is lost |
| Weekly maintenance | Good practice |
The /cxq Command: Query Context
Search Modes
Full-Text Search (Default)
/cxq "authentication JWT"
/cxq "test OR spec"
/cxq "def*" # Prefix match
/cxq "NEAR(database migration, 5)" # Words within 5 tokens
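These operators are standard FTS5 query syntax, so they can be tried directly against any FTS5 table. The table and rows below are made up for illustration:

```python
import sqlite3

# Demonstrates the FTS5 operators /cxq passes through:
# OR, prefix match (*), and NEAR().
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE msgs USING fts5(content)")
db.executemany("INSERT INTO msgs VALUES (?)", [
    ("wrote a unit test for the parser",),
    ("defined the database migration order",),
    ("ran the spec suite",),
])

def search(query: str):
    return [r[0] for r in
            db.execute("SELECT content FROM msgs WHERE msgs MATCH ?", (query,))]

print(search("test OR spec"))                 # rows containing either term
print(search("def*"))                         # prefix match: "defined"
print(search("NEAR(database migration, 5)"))  # both words within 5 tokens
```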
Semantic Search (AI-Powered)
/cxq --semantic "how to implement authentication"
/cxq --semantic "database pooling" --threshold 0.5
Finds conceptually similar content even if exact words don't match.
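Under the hood, semantic search compares embedding vectors by cosine similarity and drops hits below the threshold. A toy sketch with 3-dimensional stand-in vectors (real sentence embeddings have hundreds of dimensions):

```python
import math

# Sketch of the similarity cutoff behind --threshold: compare the query
# embedding to each stored embedding and keep hits at or above the cutoff.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [1.0, 0.2, 0.0]              # stand-in embedding for the query text
corpus = {
    "database pooling": [0.9, 0.3, 0.1],
    "frontend styling": [0.0, 0.1, 1.0],
}
threshold = 0.5
hits = {text: round(cosine(query, vec), 3)
        for text, vec in corpus.items()
        if cosine(query, vec) >= threshold}
print(hits)  # only "database pooling" survives the threshold
```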
RAG Recall (Recommended)
/cxq --recall "project topic"
Returns combined view:
- Relevant messages (semantic or FTS)
- Related decisions
- Matching code patterns
- Similar error solutions
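The combined view can be sketched as a merge over the message store and the knowledge tables. The dict-based stores below stand in for sessions.db and org.db; the real tool queries SQLite:

```python
# Sketch of the combined view --recall assembles: message hits plus any
# knowledge entries (decisions, errors) matching the topic.
MESSAGES = ["Decided to use JWT auth", "Refactored the parser"]
DECISIONS = [{"type": "architecture", "text": "Use JWT for auth"}]
ERRORS = [{"error": "TypeError in auth middleware", "fix": "check for None token"}]

def recall(topic: str) -> dict:
    t = topic.lower()
    return {
        "messages": [m for m in MESSAGES if t in m.lower()],
        "decisions": [d for d in DECISIONS if t in d["text"].lower()],
        "errors": [e for e in ERRORS if t in e["error"].lower()],
    }

result = recall("auth")
print(result["messages"], result["decisions"], result["errors"])
```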
Knowledge Queries
# Decisions
/cxq --decisions
/cxq --decisions --decision-type architecture
# Code Patterns
/cxq --patterns --language python
/cxq --patterns --pattern-type async
# Error Solutions
/cxq --errors "TypeError"
Browse & Analyze
/cxq --recent 20 # Last 20 messages
/cxq --today # Today's messages
/cxq --stats # Database statistics
/cxq --knowledge-stats # Knowledge base stats
Understanding Entry Types (ADR-025)
Claude Code sessions contain 6 distinct entry types. The comprehensive schema captures ALL types for zero data loss:
The 6 Entry Types
| Entry Type | Description | Use Cases |
|---|---|---|
| user | Your messages to Claude | Finding what you asked |
| assistant | Claude's responses | Finding code, explanations |
| system | System events | Compaction, errors, retries |
| summary | Conversation summaries | Context reconstruction |
| queue-operation | Command queue history | Debugging command execution |
| file-history-snapshot | File backup tracking | Version control within sessions |
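Counting entry types in a raw session file can be sketched as below; the `type` field name is an assumption about the JSONL layout:

```python
import json
from collections import Counter

# Tally ADR-025 entry types in a JSONL session. The inline sample
# stands in for a real ~/.claude/projects/ session file.
SAMPLE = """\
{"type": "user", "content": "hi"}
{"type": "assistant", "content": "hello"}
{"type": "system", "event": "compaction"}
{"type": "summary", "content": "short recap"}
{"type": "user", "content": "next"}
"""

counts = Counter(
    json.loads(line)["type"] for line in SAMPLE.splitlines() if line.strip()
)
print(dict(counts))  # {'user': 2, 'assistant': 1, 'system': 1, 'summary': 1}
```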
Querying by Entry Type
# View distribution of entry types
/cxq --comprehensive-stats
# Query specific entry types
/cxq --entry-type summary # All conversation summaries
/cxq --entry-type system # System events (compaction, errors)
/cxq --entry-type queue-operation # Command queue history
/cxq --entry-type file-history-snapshot # File backup entries
# FTS5 search within entry types
/cxq --search-entries "error" --entry-type system
/cxq --search-entries "compaction"
Comprehensive Indexing
# Index all 6 entry types (first time)
/cxq --index-comprehensive
# Rebuild from scratch
/cxq --rebuild-comprehensive
# Check what's indexed
/cxq --comprehensive-stats
Example output:
Comprehensive Entry Statistics (All 6 Types)
============================================================
Master Entries Table:
Total entries: 171,976
user: 41,823
assistant: 96,007
system: 743
queue-operation: 17,073
summary: 988
file-history-snapshot: 12,342
Why Comprehensive Indexing Matters
Legacy indexing captured only user and assistant messages. This missed:
| Lost Data | Impact |
|---|---|
| System events (0.4%) | Compaction history, error events |
| Summaries (0.5%) | Conversation summaries for context |
| Queue operations (9.3%) | Command history |
| File snapshots (6.7%) | File version history |
Total data loss with legacy indexing: ~16.5%
Use --index-comprehensive to capture 100% of your session data.
Daily Workflows
Morning Session Start
# 1. Recall context
/cxq --recall "feature or project name"
# 2. Check recent decisions
/cxq --decisions --decision-type architecture
# 3. Review error patterns
/cxq --errors "expected error type"
During Development
# Hit an error? Check if solved before
/cxq --errors "the error message"
# Need a code pattern?
/cxq --patterns --language python --pattern-type database
# What did we decide about X?
/cxq "X" --decisions
End of Session
# 1. Export current session
/export
# 2. Capture to knowledge base
/cx
# 3. (Optional) Update embeddings
/cxq --embeddings
Weekly Maintenance
/cxq --extract # Full knowledge extraction
/cxq --embeddings # Generate embeddings
/cxq --knowledge-stats
Large Session Processing
Why Special Processing?
Claude Code JSONL files can exceed 90MB. Standard text export methods fail on these files.
CODITECT solves this with streaming batch processing:
- Smart chunking at safe boundaries
- Global deduplication (removes duplicate messages)
- Watermark-based resume tracking (restart from failure point)
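Watermark-based resume can be sketched as checkpointing the byte offset after each processed batch, so a crashed run restarts at the last watermark instead of byte zero. The file layout here is illustrative, not the actual watermark_tracker.py format:

```python
import json
import os
import tempfile

# Sketch of watermark-based resume for streaming JSONL processing.
def process_with_watermark(jsonl_path: str, mark_path: str, batch: int = 2):
    offset = 0
    if os.path.exists(mark_path):
        with open(mark_path) as m:
            offset = json.load(m)["offset"]  # resume point from a prior run
    handled = []
    with open(jsonl_path, "rb") as f:
        f.seek(offset)
        while True:
            lines = [ln for ln in (f.readline() for _ in range(batch)) if ln]
            if not lines:
                break
            handled.extend(json.loads(ln) for ln in lines)
            with open(mark_path, "w") as m:
                json.dump({"offset": f.tell()}, m)  # checkpoint after each batch
    return handled

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "session.jsonl")
    mark = os.path.join(d, "watermark.json")
    with open(src, "w") as f:
        f.write('{"n": 1}\n{"n": 2}\n{"n": 3}\n')
    first_run = len(process_with_watermark(src, mark))
    second_run = len(process_with_watermark(src, mark))
    print(first_run, second_run)  # 3 0 -- the resume run has nothing left to do
```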
Quick Usage
# Process all large sessions (>10MB)
/process-jsonl-sessions --batch --min-size 10
# Resume from failed processing
/process-jsonl-sessions --resume SESSION_ID
# Check processing status
python3 scripts/core/watermark_tracker.py --list
Using Scripts Directly (No Activation Required)
# 1. Analyze session structure
python3 scripts/core/jsonl_analyzer.py ~/.claude/projects/.../SESSION.jsonl --show-chunks
# 2. Create chunks
python3 scripts/core/session_chunker.py ~/.claude/projects/.../SESSION.jsonl --chunk-dir /tmp/chunks
# 3. Track progress
python3 scripts/core/watermark_tracker.py --session SESSION_ID
Safe Split Points
- High Quality: File history snapshots (file-history-snapshot) - Preferred
- Medium Quality: User message starts - Acceptable
- Low Quality: Assistant end turns - Use sparingly
- UNSAFE: Mid-tool-sequence - Never
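The quality tiers above can be sketched as a lookup keyed on entry type; real chunking also inspects tool-call sequences, which this simplification only models with a flag:

```python
# Rank a candidate split point by the entry type it falls on,
# mirroring the quality tiers listed above.
QUALITY = {
    "file-history-snapshot": "high",   # preferred boundary
    "user": "medium",                  # acceptable
    "assistant": "low",                # use sparingly
}

def split_quality(entry: dict, in_tool_sequence: bool = False) -> str:
    if in_tool_sequence:
        return "unsafe"  # never split mid-tool-sequence
    return QUALITY.get(entry.get("type"), "unsafe")

print(split_quality({"type": "file-history-snapshot"}))             # high
print(split_quality({"type": "user"}))                              # medium
print(split_quality({"type": "assistant"}, in_tool_sequence=True))  # unsafe
```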
When to Use
| Session Size | Method | Reason |
|---|---|---|
| <10 MB | /cx (standard) | Fast, simple |
| 10-100 MB | /process-jsonl-sessions | Streaming with resume |
| >100 MB | Scripts + reduced chunk size | Maximum control |
Best Practices
DO
- Run /cx after every session: /export && /cx
- Start sessions with recall: /cxq --recall "current project topic"
- Check patterns before writing code: /cxq --patterns --language python "database"
- Search errors before debugging: /cxq --errors "KeyError"
- Backup regularly: ./scripts/backup-context-db.sh
- Build embeddings for semantic search: pip install sentence-transformers, then /cxq --embeddings
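For the backup step, a safe approach is SQLite's online backup API rather than copying a live .db file. This sketch is not the actual backup-context-db.sh script, just the core idea:

```python
import os
import sqlite3
import tempfile

# Snapshot a SQLite database safely, even if it is in use,
# via the online backup API (sqlite3.Connection.backup).
def backup_db(src_path: str, dest_path: str) -> None:
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    with dest:
        src.backup(dest)  # consistent snapshot of the source database
    src.close()
    dest.close()

with tempfile.TemporaryDirectory() as d:
    live = os.path.join(d, "sessions.db")       # stand-in for the real Tier-3 db
    conn = sqlite3.connect(live)
    conn.execute("CREATE TABLE t(x)")
    conn.execute("INSERT INTO t VALUES (1)")
    conn.commit()
    conn.close()

    backup_db(live, os.path.join(d, "sessions.backup.db"))
    check = sqlite3.connect(os.path.join(d, "sessions.backup.db"))
    restored = check.execute("SELECT count(*) FROM t").fetchone()[0]
    check.close()
    print(restored)  # 1 -- the row survived into the backup
```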
DON'T
- Don't modify database files directly - Use /cx and /cxq only
- Don't skip extraction - Run /cx even for short sessions
- Don't ignore the knowledge base - Check decisions/patterns/errors first
- Don't process small sessions with JSONL batch - Use /cx for <10MB
Troubleshooting
"No results found"
# Rebuild index
/cxq --rebuild
# Try broader search
/cxq "partial term"
/cxq --recent 50
"Semantic search not working"
pip install sentence-transformers
/cxq --embeddings
"Database too large"
A database of 203 MB for 49,000+ messages is healthy. Archiving old data is a planned future feature.
"Context not being recalled"
# Always run at session end
/export
/cx
"Database not found"
# Run extraction first
/cx
# Verify created (ADR-118 Tier 3)
ls -la context-storage/sessions.db
# Then query
/cxq --stats
"Knowledge extraction empty"
# Run extraction
/cxq --extract
# Check stats
/cxq --knowledge-stats
# May need more assistant messages
/cxq --count --role assistant
Command Reference
/cx Options
| Option | Description |
|---|---|
| FILE | Process single file |
| --with-embeddings | Generate semantic embeddings |
| --min-size N | Only process files >N MB |
| --dry-run | Preview without changes |
/cxq Options
| Option | Description |
|---|---|
| QUERY | Full-text search query |
| --semantic QUERY | Semantic similarity search |
| --recall QUERY | RAG retrieval (messages + knowledge) |
| --decisions | Query extracted decisions |
| --decision-type TYPE | architecture, technology, api, database, testing, deployment, security |
| --patterns | Query code patterns |
| --language LANG | python, typescript, rust, etc. |
| --pattern-type TYPE | test, async, class, error_handling, database, api |
| --errors | Query error solutions |
| --embeddings | Generate semantic embeddings |
| --extract | Run knowledge extraction |
| --rebuild | Rebuild index from scratch |
| --stats | Database statistics |
| --knowledge-stats | Knowledge base statistics |
| --recent N | Last N messages |
| --today | Today's messages |
| --full | Full content display |
| --json | JSON output |
| --index-comprehensive | Index ALL 6 entry types |
| --rebuild-comprehensive | Rebuild comprehensive index |
| --comprehensive-stats | Entry type statistics |
| --search-entries QUERY | FTS5 search across all types |
| --entry-type TYPE | Filter by entry type |
| --threading | Message threading relationships |
| --thread-from UUID | Trace thread from UUID |
Summary
| Action | Command | When |
|---|---|---|
| Capture session | /cx | End of every session |
| Recall context | /cxq --recall "topic" | Start of session |
| Search messages | /cxq "query" | Finding specific content |
| Find decisions | /cxq --decisions | Checking past choices |
| Find patterns | /cxq --patterns | Reusing code |
| Find error fixes | /cxq --errors | Solving known issues |
| Build knowledge | /cxq --extract | Weekly maintenance |
| Enable AI search | /cxq --embeddings | After extract |
Remember: Memory management eliminates catastrophic forgetting. Make /cx and /cxq part of your daily workflow.
Script Location: scripts/context-db.py
Databases (ADR-118): context-storage/sessions.db (Tier 3), context-storage/org.db (Tier 2)
Version: 4.0.0 (Anti-Forgetting + Comprehensive Entries)
Related ADRs: ADR-020, ADR-021, ADR-025, ADR-118