
Memory Management Guide

Complete guide to CODITECT's anti-forgetting memory system.

Target Audience: All CODITECT users
Last Updated: December 22, 2025
Status: Production Ready


Table of Contents

  1. Why Memory Matters
  2. Quick Start (5 Minutes)
  3. The /cx Command: Capture Context
  4. The /cxq Command: Query Context
  5. Understanding Entry Types (ADR-025)
  6. Daily Workflows
  7. Large Session Processing
  8. Best Practices
  9. Troubleshooting

Why Memory Matters

The Anti-Forgetting Problem

Every new Claude Code session starts with a blank slate. Without memory:

| Without Memory | With CODITECT Memory |
|---|---|
| 5-10 min re-explaining context | Instant recall |
| Contradicts past decisions | Remembers what was decided |
| Re-debugs same issues | Learns from past fixes |
| Inconsistent code patterns | Reuses proven patterns |

The Solution

CODITECT captures, indexes, and retrieves everything from your sessions:

Session Work → /cx (extract) → ADR-118 DBs (index) → /cxq (recall)
     ↓                               ↓
   JSONL                     Decisions, Patterns,
  Sessions                    Error Solutions

Storage (ADR-118 Four-Tier):

  • context-storage/sessions.db - Tier 3: Session messages (FTS5)
  • context-storage/org.db - Tier 2: Decisions, patterns, errors (CRITICAL)
  • context-storage/unified_messages.jsonl - Raw message archive
  • context-storage/exports-pending/ - Unprocessed export files (where exports go first)
  • context-storage/exports-archive/ - Processed export files (moved after /cx)

Quick Start (5 Minutes)

Step 1: Capture Current Session

/cx

Extracts all messages, indexes for search, archives exports.

Step 2: Query Your Memory

/cxq --recall "topic you're working on"

Returns: relevant messages + decisions + code patterns + error solutions.

Step 3: Build Knowledge Base (First Time)

/cxq --index-comprehensive  # Index ALL 6 entry types (recommended)
/cxq --extract              # Extract decisions, patterns, errors
/cxq --embeddings           # (Optional) Enable semantic search

Note: Use --index-comprehensive instead of --index to capture all entry types including system events, summaries, and file backups.


The /cx Command: Capture Context

Basic Usage

/cx                     # Process all sessions and exports
/cx FILE                # Process single file
/cx --with-embeddings   # Also generate semantic embeddings
/cx --min-size 10       # Only large JSONL files (>10MB)

What /cx Does

  1. Scans ~/.claude/projects/ for JSONL session files (read-only)
  2. Finds export TXT files in exports-pending/ (primary) and project directories
  3. Extracts unique messages using SHA-256 deduplication
  4. Archives processed export files to context-storage/exports-archive/
  5. Auto-indexes into SQLite with FTS5 full-text search
  6. Auto-extracts knowledge (decisions, patterns, errors)
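Step 3's SHA-256 deduplication can be sketched in a few lines of Python. This is an illustration only; the actual record fields and hashing scheme used by CODITECT are assumptions here:

```python
import hashlib
import json

def message_fingerprint(entry: dict) -> str:
    """Hash the stable parts of a message so re-exported copies
    collapse to one fingerprint. Field names are illustrative."""
    canonical = json.dumps(
        {"role": entry.get("type"), "content": entry.get("message")},
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def deduplicate(entries):
    """Yield only entries whose fingerprint has not been seen before."""
    seen = set()
    for entry in entries:
        fp = message_fingerprint(entry)
        if fp not in seen:
            seen.add(fp)
            yield entry

entries = [
    {"type": "user", "message": "fix the login bug"},
    {"type": "user", "message": "fix the login bug"},  # duplicate from a re-export
    {"type": "assistant", "message": "Patched auth.py"},
]
unique = list(deduplicate(entries))
print(len(unique))  # 2 of 3 survive deduplication
```

Because the same session can appear both as a JSONL file and as an export TXT, content-based fingerprints are what keep each message stored exactly once.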

When to Run

| Scenario | Recommendation |
|---|---|
| End of work session | Always |
| After major milestone | Run to preserve decisions |
| Before extended break | Ensures nothing is lost |
| Weekly maintenance | Good practice |

The /cxq Command: Query Context

Search Modes

Full-Text Search (Default)

/cxq "authentication JWT"
/cxq "test OR spec"
/cxq "def*"                         # Prefix match
/cxq "NEAR(database migration, 5)"  # Words within 5 tokens
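These query forms are standard SQLite FTS5 syntax, so you can experiment with them directly against any FTS5 table (a self-contained sketch; the real sessions.db schema may differ, and this requires an SQLite build with FTS5, which most Python distributions include):

```python
import sqlite3

# In-memory FTS5 table standing in for sessions.db (schema is a sketch).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE messages USING fts5(content)")
conn.executemany(
    "INSERT INTO messages(content) VALUES (?)",
    [
        ("authentication with JWT tokens",),
        ("database migration applied to staging",),
        ("define a helper for retries",),
    ],
)

def search(query: str):
    """Run an FTS5 MATCH query and return matching message bodies."""
    rows = conn.execute(
        "SELECT content FROM messages WHERE messages MATCH ?", (query,)
    ).fetchall()
    return [r[0] for r in rows]

print(search("authentication AND JWT"))        # both terms required
print(search("def*"))                          # prefix match
print(search("NEAR(database migration, 5)"))   # terms within 5 tokens
```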

Semantic Search (AI-Powered)

/cxq --semantic "how to implement authentication"
/cxq --semantic "database pooling" --threshold 0.5

Finds conceptually similar content even if exact words don't match.
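Under the hood, semantic search compares embedding vectors by similarity and keeps hits above --threshold. A toy cosine-similarity sketch with 3-dimensional vectors (real embeddings, e.g. from sentence-transformers, have hundreds of dimensions; the scoring function is assumed, not confirmed from CODITECT's source):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": the query vector points roughly the same way as the
# pooling doc, and nearly orthogonal to the styling doc.
query = [0.9, 0.1, 0.0]
docs = {
    "database pooling":     [0.8, 0.2, 0.1],
    "unrelated UI styling": [0.0, 0.1, 0.9],
}
threshold = 0.5  # mirrors /cxq --semantic ... --threshold 0.5
matches = [
    name for name, vec in docs.items()
    if cosine_similarity(query, vec) >= threshold
]
print(matches)  # only "database pooling" clears the threshold
```

This is why a query like "database pooling" can surface a message about "connection reuse" even when no keywords overlap: closeness in embedding space, not shared tokens, decides the match.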

Combined Recall (RAG)

/cxq --recall "project topic"

Returns combined view:

  • Relevant messages (semantic or FTS)
  • Related decisions
  • Matching code patterns
  • Similar error solutions

Knowledge Queries

# Decisions
/cxq --decisions
/cxq --decisions --decision-type architecture

# Code Patterns
/cxq --patterns --language python
/cxq --patterns --pattern-type async

# Error Solutions
/cxq --errors "TypeError"

Browse & Analyze

/cxq --recent 20        # Last 20 messages
/cxq --today            # Today's messages
/cxq --stats            # Database statistics
/cxq --knowledge-stats  # Knowledge base stats

Understanding Entry Types (ADR-025)

Claude Code sessions contain 6 distinct entry types. The comprehensive schema captures ALL types for zero data loss:

The 6 Entry Types

| Entry Type | Description | Use Cases |
|---|---|---|
| user | Your messages to Claude | Finding what you asked |
| assistant | Claude's responses | Finding code, explanations |
| system | System events | Compaction, errors, retries |
| summary | Conversation summaries | Context reconstruction |
| queue-operation | Command queue history | Debugging command execution |
| file-history-snapshot | File backup tracking | Version control within sessions |
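If you want to inspect a raw session yourself, the per-type distribution can be computed from the JSONL directly (a sketch; it assumes each line is a JSON object with a top-level type field, as in the table above):

```python
import json
from collections import Counter

def entry_type_distribution(jsonl_lines):
    """Count entries per type, similar to what --comprehensive-stats reports."""
    counts = Counter()
    for line in jsonl_lines:
        line = line.strip()
        if line:  # skip blank lines in the file
            counts[json.loads(line).get("type", "unknown")] += 1
    return counts

# Four illustrative JSONL lines (field names beyond "type" are assumptions).
sample = [
    '{"type": "user", "message": "run the tests"}',
    '{"type": "assistant", "message": "All 14 tests pass."}',
    '{"type": "system", "subtype": "compaction"}',
    '{"type": "summary", "summary": "Fixed the flaky test"}',
]
print(entry_type_distribution(sample))
```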

Querying by Entry Type

# View distribution of entry types
/cxq --comprehensive-stats

# Query specific entry types
/cxq --entry-type summary                # All conversation summaries
/cxq --entry-type system                 # System events (compaction, errors)
/cxq --entry-type queue-operation        # Command queue history
/cxq --entry-type file-history-snapshot  # File backup entries

# FTS5 search within entry types
/cxq --search-entries "error" --entry-type system
/cxq --search-entries "compaction"

Comprehensive Indexing

# Index all 6 entry types (first time)
/cxq --index-comprehensive

# Rebuild from scratch
/cxq --rebuild-comprehensive

# Check what's indexed
/cxq --comprehensive-stats

Example output:

Comprehensive Entry Statistics (All 6 Types)
============================================================

Master Entries Table:
  Total entries:           171,976
    user:                   41,823
    assistant:              96,007
    system:                    743
    queue-operation:        17,073
    summary:                   988
    file-history-snapshot:  12,342

Why Comprehensive Indexing Matters

Legacy indexing captured only user and assistant messages. This missed:

| Lost Data | Impact |
|---|---|
| System events (0.4%) | Compaction history, error events |
| Summaries (0.5%) | Conversation summaries for context |
| Queue operations (9.3%) | Command history |
| File snapshots (6.7%) | File version history |

Total data loss with legacy indexing: ~16.5%

Use --index-comprehensive to capture 100% of your session data.


Daily Workflows

Morning Session Start

# 1. Recall context
/cxq --recall "feature or project name"

# 2. Check recent decisions
/cxq --decisions --decision-type architecture

# 3. Review error patterns
/cxq --errors "expected error type"

During Development

# Hit an error? Check if solved before
/cxq --errors "the error message"

# Need a code pattern?
/cxq --patterns --language python --pattern-type database

# What did we decide about X?
/cxq "X" --decisions

End of Session

# 1. Export current session
/export

# 2. Capture to knowledge base
/cx

# 3. (Optional) Update embeddings
/cxq --embeddings

Weekly Maintenance

/cxq --extract      # Full knowledge extraction
/cxq --embeddings   # Generate embeddings
/cxq --knowledge-stats

Large Session Processing

Why Special Processing?

Claude Code JSONL files can exceed 90MB. Standard text export methods fail on these files.

CODITECT solves this with streaming batch processing:

  • Smart chunking at safe boundaries
  • Global deduplication (removes duplicate messages)
  • Watermark-based resume tracking (restart from failure point)
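The watermark idea can be illustrated with a minimal tracker. This is a sketch only; the actual interface of scripts/core/watermark_tracker.py is an assumption:

```python
import json
import os
import tempfile
from pathlib import Path

class WatermarkTracker:
    """Persist the last successfully processed chunk index per session
    so a failed run can resume instead of starting over."""

    def __init__(self, path):
        self.path = Path(path)

    def last_done(self, session_id):
        if not self.path.exists():
            return -1
        return json.loads(self.path.read_text()).get(session_id, -1)

    def mark_done(self, session_id, chunk_index):
        state = json.loads(self.path.read_text()) if self.path.exists() else {}
        state[session_id] = chunk_index
        self.path.write_text(json.dumps(state))

def process_chunks(chunks, session_id, tracker):
    """Skip chunks at or below the watermark; advance it after each success."""
    done_up_to = tracker.last_done(session_id)
    processed = []
    for i, chunk in enumerate(chunks):
        if i <= done_up_to:
            continue  # finished in a previous run
        processed.append(chunk)  # real work: index the chunk here
        tracker.mark_done(session_id, i)
    return processed

path = os.path.join(tempfile.mkdtemp(), "watermarks.json")
tracker = WatermarkTracker(path)
chunks = ["chunk-0", "chunk-1", "chunk-2"]
first = process_chunks(chunks, "SESSION_A", tracker)
second = process_chunks(chunks, "SESSION_A", tracker)
print(len(first), len(second))  # 3 0 -- the second run finds nothing left
```

Because the watermark advances only after a chunk succeeds, a crash mid-run means the next invocation re-starts exactly at the first unfinished chunk.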

Quick Usage

# Process all large sessions (>10MB)
/process-jsonl-sessions --batch --min-size 10

# Resume from failed processing
/process-jsonl-sessions --resume SESSION_ID

# Check processing status
python3 scripts/core/watermark_tracker.py --list

Using Scripts Directly (No Activation Required)

# 1. Analyze session structure
python3 scripts/core/jsonl_analyzer.py ~/.claude/projects/.../SESSION.jsonl --show-chunks

# 2. Create chunks
python3 scripts/core/session_chunker.py ~/.claude/projects/.../SESSION.jsonl --chunk-dir /tmp/chunks

# 3. Track progress
python3 scripts/core/watermark_tracker.py --session SESSION_ID

Safe Split Points

  1. High Quality: File history snapshots (file-history-snapshot) - Preferred
  2. Medium Quality: User message starts - Acceptable
  3. Low Quality: Assistant end turns - Use sparingly
  4. UNSAFE: Mid-tool-sequence - Never
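The ranking above can be expressed as a small helper that picks the best split point at or before a target index (illustrative only; entry shapes and field names are simplified assumptions):

```python
# Quality scores mirror the ranking above.
SPLIT_QUALITY = {
    "file-history-snapshot": 3,  # high quality boundary
    "user": 2,                   # medium: start of a user turn
    "assistant-end": 1,          # low: use sparingly
}

def best_split(entries, target_index):
    """Pick the highest-quality split point at or before target_index,
    never splitting inside a tool-use sequence."""
    best_i, best_q = None, 0
    for i, entry in enumerate(entries[: target_index + 1]):
        if entry.get("in_tool_sequence"):
            continue  # UNSAFE: never split mid-tool-sequence
        q = SPLIT_QUALITY.get(entry.get("type"), 0)
        if q >= best_q and q > 0:
            best_i, best_q = i, q
    return best_i

entries = [
    {"type": "user"},
    {"type": "assistant", "in_tool_sequence": True},
    {"type": "file-history-snapshot"},
    {"type": "user"},
]
print(best_split(entries, 3))  # 2: the snapshot outranks both user turns
```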

When to Use

| Session Size | Method | Reason |
|---|---|---|
| <10 MB | /cx (standard) | Fast, simple |
| 10-100 MB | /process-jsonl-sessions | Streaming with resume |
| >100 MB | Scripts + reduced chunk size | Maximum control |

Best Practices

DO

  1. Run /cx after every session

    /export && /cx
  2. Start sessions with recall

    /cxq --recall "current project topic"
  3. Check patterns before writing code

    /cxq --patterns --language python "database"
  4. Search errors before debugging

    /cxq --errors "KeyError"
  5. Backup regularly

    ./scripts/backup-context-db.sh
  6. Build embeddings for semantic search

    pip install sentence-transformers
    /cxq --embeddings

DON'T

  1. Don't modify database files directly - Use /cx and /cxq only
  2. Don't skip extraction - Run /cx even for short sessions
  3. Don't ignore the knowledge base - Check decisions/patterns/errors first
  4. Don't process small sessions with JSONL batch - Use /cx for <10MB

Troubleshooting

"No results found"

# Rebuild index
/cxq --rebuild

# Try broader search
/cxq "partial term"
/cxq --recent 50

"Semantic search not working"

pip install sentence-transformers
/cxq --embeddings

"Database too large"

A database of ~203 MB holding 49,000+ messages is healthy; no action is needed. Archiving old data is planned as a future feature.

"Context not being recalled"

# Always run at session end
/export
/cx

"Database not found"

# Run extraction first
/cx

# Verify created (ADR-118 Tier 3)
ls -la context-storage/sessions.db

# Then query
/cxq --stats

"Knowledge extraction empty"

# Run extraction
/cxq --extract

# Check stats
/cxq --knowledge-stats

# May need more assistant messages
/cxq --count --role assistant

Command Reference

/cx Options

| Option | Description |
|---|---|
| FILE | Process single file |
| --with-embeddings | Generate semantic embeddings |
| --min-size N | Only process files >N MB |
| --dry-run | Preview without changes |

/cxq Options

| Option | Description |
|---|---|
| QUERY | Full-text search query |
| --semantic QUERY | Semantic similarity search |
| --recall QUERY | RAG retrieval (messages + knowledge) |
| --decisions | Query extracted decisions |
| --decision-type TYPE | architecture, technology, api, database, testing, deployment, security |
| --patterns | Query code patterns |
| --language LANG | python, typescript, rust, etc. |
| --pattern-type TYPE | test, async, class, error_handling, database, api |
| --errors | Query error solutions |
| --embeddings | Generate semantic embeddings |
| --extract | Run knowledge extraction |
| --rebuild | Rebuild index from scratch |
| --stats | Database statistics |
| --knowledge-stats | Knowledge base statistics |
| --recent N | Last N messages |
| --today | Today's messages |
| --full | Full content display |
| --json | JSON output |
| --index-comprehensive | Index ALL 6 entry types |
| --rebuild-comprehensive | Rebuild comprehensive index |
| --comprehensive-stats | Entry type statistics |
| --search-entries QUERY | FTS5 search across all types |
| --entry-type TYPE | Filter by entry type |
| --threading | Message threading relationships |
| --thread-from UUID | Trace thread from UUID |

Summary

| Action | Command | When |
|---|---|---|
| Capture session | /cx | End of every session |
| Recall context | /cxq --recall "topic" | Start of session |
| Search messages | /cxq "query" | Finding specific content |
| Find decisions | /cxq --decisions | Checking past choices |
| Find patterns | /cxq --patterns | Reusing code |
| Find error fixes | /cxq --errors | Solving known issues |
| Build knowledge | /cxq --extract | Weekly maintenance |
| Enable AI search | /cxq --embeddings | After extract |

Remember: Memory management eliminates catastrophic forgetting. Make /cx and /cxq part of your daily workflow.


Script Location: scripts/context-db.py
Databases (ADR-118): context-storage/sessions.db (Tier 3), context-storage/org.db (Tier 2)
Version: 4.0.0 (Anti-Forgetting + Comprehensive Entries)
Related ADRs: ADR-020, ADR-021, ADR-025, ADR-118