Anthropic Research Summary

Purpose: Consolidated findings from extensive Anthropic Claude research
Scope: Agent patterns, memory systems, multi-session workflows, tool use, prompt engineering
Sources: 48 research documents + 35 external references
Last Updated: December 22, 2025


Executive Summary

Key Findings

  1. Multi-Session Continuity is critical for long-running agents

    • Progressive disclosure patterns reduce token usage by 60-75%
    • State management via JSON + Markdown checkpoints proven effective
    • Git-based session recovery enables cross-context continuity
  2. Agent Skills Architecture scales better than monolithic agents

    • Metadata-driven discovery reduces prompt size by 75-85%
    • Hierarchical skill organization (Tool Search Tool pattern)
    • Dynamic activation based on task requirements
  3. Memory Systems prevent catastrophic forgetting

    • Multi-tier memory (episodic, semantic, procedural)
    • Context database with 584MB+ of session history
    • Deduplication yields a 93% size reduction while preserving all unique content
  4. Tool Use Patterns maximize efficiency

    • Parallel tool calling for independent operations
    • Strategic tool sequencing for dependencies
    • Error handling with circuit breakers and retries

Research Categories

1. Multi-Session Pattern Research

Primary Documents:

Key Patterns:

Progressive Disclosure

Session 1 (Initial Context):
- High-level overview only
- Core concepts and entry points
- "See detailed docs at X for Y"

Session 2+ (Just-in-Time):
- Load only needed details when requested
- Reference previous session state
- Minimize redundant context

Benefits:

  • 60-75% token reduction
  • Faster response times
  • Preserved context accuracy
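The two-session pattern above can be sketched as a two-level document index: cheap summaries are always available, full bodies load just-in-time. The `DocIndex` class and document names here are illustrative, not part of any Anthropic API:

```python
class DocIndex:
    """Two-level index: lightweight summaries up front, full docs on demand."""

    def __init__(self, docs):
        # docs: {name: {"summary": str, "body": str}}
        self._docs = docs

    def overview(self):
        """Session 1: high-level overview only."""
        return {name: d["summary"] for name, d in self._docs.items()}

    def detail(self, name):
        """Session 2+: load the full body only when requested."""
        return self._docs[name]["body"]


index = DocIndex({
    "memory": {"summary": "Multi-tier memory design", "body": "...full 3000-token doc..."},
    "skills": {"summary": "Skill discovery patterns", "body": "...full 5000-token doc..."},
})
print(index.overview())        # small payload, always loaded
print(index.detail("memory"))  # loaded just-in-time
```

The overview is what "Session 1" carries; `detail()` is the "see detailed docs at X" follow-up.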

State Checkpoint Pattern

# State files
progress.json # Machine-readable progress
progress.txt # Human-readable summary
session-state.md # Markdown checkpoint

# Git-based recovery
git tag session-23-checkpoint
git checkout session-23-checkpoint

Implementation in CODITECT:

  • /cx command captures session exports
  • MEMORY-CONTEXT/sessions/ stores checkpoints
  • context.db provides queryable history
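A minimal checkpoint writer for the three state files listed above might look like this; the field names and directory layout are illustrative, not the actual /cx implementation:

```python
import json
import tempfile
from pathlib import Path


def write_checkpoint(session_id, completed, next_steps, root="."):
    """Persist machine- and human-readable session state."""
    root = Path(root)
    state = {"session": session_id, "completed": completed, "next": next_steps}
    # Machine-readable progress
    (root / "progress.json").write_text(json.dumps(state, indent=2))
    # Human-readable summary
    (root / "progress.txt").write_text(
        f"Session {session_id}: {len(completed)} tasks done, next: {next_steps[0]}\n"
    )
    # Markdown checkpoint with task checklist
    (root / "session-state.md").write_text(
        f"# Session {session_id} Checkpoint\n\n"
        + "\n".join(f"- [x] {t}" for t in completed) + "\n"
        + "\n".join(f"- [ ] {t}" for t in next_steps) + "\n"
    )
    return state


tmp = tempfile.mkdtemp()
state = write_checkpoint(23, ["design schema"], ["implement queries"], root=tmp)
```

Tagging the commit afterward (`git tag session-23-checkpoint`) makes the checkpoint recoverable from any later context.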

2. Agent Skills Architecture

Primary Documents:

Core Concepts:

Tool Search Tool Pattern (75-85% reduction)

# Instead of loading all 189 skills:
skills/
  ai-curriculum-specialist/SKILL.md   # Loaded on-demand
  assessment-creation-agent/SKILL.md  # Not loaded unless needed

# Agent sees only metadata:
{
  "name": "ai-curriculum-specialist",
  "summary": "Educational content generation",
  "when_to_use": "Creating curriculum modules",
  "capabilities": ["content-generation", "assessment-design"]
}
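One way to realize this pattern is to match tasks against the lightweight metadata and read a SKILL.md only for the winner. The helper below is a sketch, not the actual loader; the metadata records mirror the example above:

```python
from pathlib import Path

SKILL_METADATA = [
    {"name": "ai-curriculum-specialist", "summary": "Educational content generation",
     "when_to_use": "Creating curriculum modules"},
    {"name": "assessment-creation-agent", "summary": "Assessment design",
     "when_to_use": "Building quizzes and rubrics"},
]


def find_skill(task_keywords):
    """Match a task against metadata only -- no SKILL.md has been loaded yet."""
    for meta in SKILL_METADATA:
        if any(kw in meta["when_to_use"].lower() for kw in task_keywords):
            return meta["name"]
    return None


def load_skill(name, root="skills"):
    """Load the full SKILL.md for the single matched skill."""
    return Path(root, name, "SKILL.md").read_text()


print(find_skill(["curriculum"]))  # 'ai-curriculum-specialist'
```

The agent's prompt carries only `SKILL_METADATA`; the 75-85% reduction comes from never inlining the other 188 skill bodies.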

Hierarchical Organization

Level 1: Agent Categories (research, development, qa)
Level 2: Specialized Agents (50+ agents)
Level 3: Skills (189 reusable patterns)
Level 4: Commands (72 slash commands)

Metadata-Driven Discovery

{
  "type": "agent",
  "name": "codi-documentation-writer",
  "capabilities": ["api-docs", "user-guides", "technical-writing"],
  "audience": ["developers", "end-users"],
  "activation_required": true,
  "tokens": "~3000"
}

CODITECT Implementation:

  • config/framework-registry.json - Master catalog
  • config/component-activation-status.json - Dynamic activation
  • scripts/update-component-activation.py - Management tool
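Discovery over a registry like framework-registry.json reduces to filtering records by capability and audience. The field names follow the example record above; the registry contents here are invented:

```python
REGISTRY = [
    {"type": "agent", "name": "codi-documentation-writer",
     "capabilities": ["api-docs", "user-guides", "technical-writing"],
     "audience": ["developers", "end-users"], "tokens": "~3000"},
    {"type": "agent", "name": "codi-test-engineer",
     "capabilities": ["unit-tests", "integration-tests"],
     "audience": ["developers"], "tokens": "~2500"},
]


def discover(capability, audience=None):
    """Return agent names matching a capability (and optionally an audience)."""
    return [
        r["name"] for r in REGISTRY
        if capability in r["capabilities"]
        and (audience is None or audience in r["audience"])
    ]


print(discover("api-docs", audience="developers"))  # ['codi-documentation-writer']
```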

3. Memory Systems Research

Primary Documents:

Memory Framework Types:

1. Episodic Memory (Session History)

  • What: Chronological record of all interactions
  • Storage: context.db (584MB), unified_messages.jsonl (112MB)
  • Retention: Full history with deduplication
  • Access: /cxq --recent 200, /cxq --today

2. Semantic Memory (Knowledge Base)

  • What: Extracted decisions, patterns, solutions
  • Storage: knowledge_base table in context.db
  • Retention: Permanent with version control
  • Access: /cxq --recall "topic", /cxq --knowledge-stats

3. Procedural Memory (Skills & Patterns)

  • What: Reusable workflows and automation
  • Storage: skills/ directory, workflow library
  • Retention: Git-versioned, evolving
  • Access: Component activation system
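The episodic and semantic tiers can both be served from a single SQLite file. The schema below is illustrative, not the real context.db layout, and the in-memory database stands in for the 584MB file:

```python
import sqlite3

db = sqlite3.connect(":memory:")  # stand-in for context.db
db.executescript("""
    CREATE TABLE messages (ts TEXT, role TEXT, content TEXT);   -- episodic tier
    CREATE TABLE knowledge_base (topic TEXT, decision TEXT);    -- semantic tier
""")
db.execute("INSERT INTO messages VALUES ('2025-12-22', 'user', 'Refactor loader')")
db.execute("INSERT INTO knowledge_base VALUES ('memory', 'Use SQLite over mem0')")


def recall(topic):
    """Semantic recall, roughly the shape of /cxq --recall 'topic'."""
    rows = db.execute(
        "SELECT decision FROM knowledge_base WHERE topic = ?", (topic,)
    ).fetchall()
    return [r[0] for r in rows]


print(recall("memory"))  # ['Use SQLite over mem0']
```

The procedural tier stays outside the database: skills live as Git-versioned files and are surfaced through component activation rather than queries.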

Comparative Analysis:

| Framework | Strengths | Weaknesses | CODITECT Equivalent |
|---|---|---|---|
| mem0 | Multi-user, persistent | External dependency | context.db |
| LangMem | LangChain integration | Complex setup | Session exports |
| Memobase | Vector search | Resource-intensive | Knowledge base |
| OpenAI Swarm | Agent coordination | Swarm-specific | Multi-agent orchestration |

CODITECT Advantages:

  • Zero external dependencies
  • SQLite-based (584MB, fast queries)
  • Git-integrated (version control + backups)
  • Cloud backup (GCS: 90-day retention)

4. Claude Code Best Practices

Primary Documents:

Proven Patterns:

1. CLAUDE.md Organization

# Hierarchical CLAUDE.md Structure
ROOT/CLAUDE.md # Master orchestration
├── docs/project/CLAUDE.md # Project-specific context
├── docs/adrs/CLAUDE.md # ADR navigation
└── scripts/CLAUDE.md # Script execution

Benefits: Context scoping, role clarity, token efficiency
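Context scoping can be implemented by collecting every CLAUDE.md on the path from the repository root down to the working directory, most general first. This is a sketch of the idea; Claude Code's actual lookup behavior may differ:

```python
import tempfile
from pathlib import Path


def claude_md_chain(root, subdir):
    """CLAUDE.md files from root down to subdir, most general first."""
    root = Path(root).resolve()
    d = (root / subdir).resolve()
    found = []
    while True:
        candidate = d / "CLAUDE.md"
        if candidate.exists():
            found.append(candidate)
        if d == root or d == d.parent:
            break
        d = d.parent
    return list(reversed(found))


# Demonstrate on a throwaway tree mirroring the structure above
tmp = Path(tempfile.mkdtemp())
(tmp / "docs" / "adrs").mkdir(parents=True)
(tmp / "CLAUDE.md").write_text("# Master orchestration\n")
(tmp / "docs" / "adrs" / "CLAUDE.md").write_text("# ADR navigation\n")

chain = claude_md_chain(tmp, "docs/adrs")
print([p.read_text().strip() for p in chain])
```

Loading the chain in order gives the master orchestration context first, then progressively narrower scopes, which is where the token efficiency comes from.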

2. Frontmatter Standards

---
title: "Document Title"
audience: [user|contributor]
type: [guide|reference|research|workflow]
tokens: ~X000
summary: "One-line AI agent summary"
when_to_read: "Specific use case"
keywords: [searchable, terms]
---

Why this works:

  • AI agents can quickly assess relevance
  • Token budgeting (know doc size before loading)
  • Searchability (keyword-driven discovery)
  • Role-based filtering (user vs contributor)
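A minimal frontmatter reader supports both the relevance check and the token budgeting described above. This is naive `key: value` parsing, not a full YAML parser, and the sample document is invented:

```python
def parse_frontmatter(text):
    """Extract simple key: value pairs from a --- delimited header."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip().strip('"')
    return meta


doc = '''---
title: "Memory Systems Research"
tokens: ~4000
summary: "Multi-tier memory design notes"
---
Body text...'''

meta = parse_frontmatter(doc)
if int(meta["tokens"].lstrip("~")) <= 5000:  # budget check before loading the body
    print("load:", meta["title"])
```

Because only the header is parsed, an agent can reject an oversized or irrelevant document without ever paying for its body.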

3. Communication Patterns

# ❌ Verbose (wastes tokens)
"I understand you want me to implement feature X. Let me explain
my approach in detail before I begin..."

# ✅ Concise (efficient)
"Implementing feature X via Pattern Y. Starting now."

# ✅ Structured (actionable)
**Plan:** 3 steps (A, B, C)
**Status:** Step A in progress
**Next:** Step B after approval

5. Tool Use Patterns

Primary Documents:

Optimal Patterns:

Parallel Tool Calling (Independent Operations)

# ✅ Efficient: Call in parallel
<function_calls>
<invoke name="Read"><parameter name="file_path">file1.md